> I just find it amusing that people in the ML field are always so adamant that ‘it doesn’t know’ or ‘it’s not thinking’ or ‘it doesn’t understand’, but I don’t see much engagement with what is actually meant by ‘know’, ‘think’ or ‘understand’ in the first place.
Exactly. This is why these experts should stop insisting that these AI models don't know things or aren't sentient. We don't have any mechanistic explanation of what knowledge is or how it works, or of what sentience is or how it works, so all of these claims are just bullshit. At best we have some hand-wavy intuitions about what properties knowledge or sentience might have (the Gettier problem, etc.), but that's nowhere near enough to support the definitive claims I've seen.