it's just an advanced ELIZA bot with a 40,000-word working memory. it doesn't reason, and therefore can't determine the accuracy of a statement. considering it uses a fixed dataset, it's normal to expect factual errors in its interpretation of the data. it knows nothing about the world, and has no sensing apparatus. it's literally just trying to figure out what you want to hear, not drawing any conclusions about it.
you claim to know a lot about AI implicitly when you say that you worry about those who don't know as much, but this article would suggest otherwise, since you clearly don't understand the limitations of a language model. it's not answering your query about drinks, it's pretending to hold a conversation on the topic. it's a fine distinction which you obviously missed.
when you're done anthropomorphizing it and projecting your feelings and biases all over the place, maybe try asking it to solve a math problem, and you should understand the limitations I'm speaking of. it's interesting that, since code is a linguistic representation of formal logic, it could create a script that solves a problem, but it can't actually think its way through the problem.
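[For concreteness, this is the kind of short script being described: the model emits code, and the Python interpreter does the actual computation. The specific problem and the function name are purely illustrative, not taken from the original comment.]

```python
# Illustrative only: the sort of script a language model might generate
# when asked "which two numbers sum to 10 and multiply to 21?"
# The arithmetic is carried out by the interpreter, not by the model.

def find_pair(target_sum, target_product):
    """Brute-force search over small integers for a pair (a, b)."""
    for a in range(-100, 101):
        b = target_sum - a
        if a * b == target_product:
            return a, b
    return None

print(find_pair(10, 21))  # -> (3, 7)
```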
Language models are initially trained as imitators of vast volumes of human text, which is quite different from ELIZA, which is explicitly programmed. Since the models are trained on Reddit and the like, of course they will make factual errors. It sounds like you are saying that these models obviously make factual errors and so shouldn't be relied on for facts. I wrote this for the many thousands of people who do not know that, and who are currently saying things like "this is going to replace Google for me."
Some may argue that using intentional terms isn't useful regarding language models because language models lack intentionality. I think anyone who says that should define intentionality. I think you'd be hard pressed to find a definition that wasn't on a continuum, and I think you'd be hard pressed to find a definition that didn't place language models somewhere along that continuum. I am partial to Dennett's definition (linked above), which is roughly speaking that we should use intentional words in order to convey something about the behavior of a system, not its internals. I think in the case of ChatGPT, these intentional words are actually fairly predictive (see this post). The internals do not seem to matter that much. I think you know this too, because you say that the model is "trying" to do something.
Models are not able to solve all math problems, but they can do quite well if you train them properly. See the Minerva paper, for example. This doesn't seem like a fundamental limitation. Like intentionality, I see no reason that we cannot ascribe the ability to reason (to some extent) to a language model.
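Roughly speaking, Minerva samples many chain-of-thought solutions and then takes a majority vote over the final answers. A minimal sketch of that voting step (my own toy code, with hard-coded stand-ins for model samples, not code from the paper):

```python
# Toy sketch of majority voting ("self-consistency") over sampled answers.
# The sampled answers below are hypothetical stand-ins for what a model
# would actually generate when solving a math problem several times.
from collections import Counter

sampled_answers = ["42", "42", "41", "42", "7"]  # hypothetical samples

def majority_vote(answers):
    """Return the most common final answer among the samples."""
    counts = Counter(answers)
    answer, _ = counts.most_common(1)[0]
    return answer

print(majority_vote(sampled_answers))  # -> "42"
```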
The only place where I think this kind of talk doesn't make sense is when we start to think about consciousness and qualia. In this case, the internals actually do matter, because we may care about subjective experience for moral reasons. We had better be careful about ascribing qualia merely on the basis of behavior, for this reason.
People who gaslight are childish, insecure, and intellectually harmful. They hope to put others down in order to prop up their own self-image.
It should have used a different term, like “perceived sweetness.”
So, the machine you interacted with had (HAS) every intention to dupe, lie, deceive, and cause you to stumble in your search for truth...? Is this malice? Did this machine's acting offend you?
Be well.
H.Rojas