An AGI is something you can give tasks to and it will complete them, for some broad collection of tasks that would be non-trivial for a human to figure out how to do. It's unclear at this point whether you could engineer an AGI, and even more unclear whether an AGI would, by its nature, be "sentient" (i.e., self-aware, conscious, and having agency). Many of us believe that sentience is an emergent property of intelligence but not a necessary one, and it's not even clear that our own sentience means we humans are genuinely self-aware, conscious, and have agency.
Let's say I give your AGI (which is not self-aware and does not have consciousness) a task.
The task is to go and jump off a bridge. Your AGI would complete the task with no questions asked, but a self-aware AGI would at least ask "Why?"
AGI would be something that is able to do many different tasks without being specifically built for each one. It could learn and then do, like a person: it can learn to drive a car and then learn to iron clothes.
It does not need to be self-aware; it can still be considered a machine.
People think that if we get to AGI, maybe we'll also get self-awareness, but we won't know until it happens. We don't fully understand how sentience works.