  #6  
01-21-2023, 10:31 PM
BobC
Bob C.
 
Join Date: Apr 2009
Location: Ohio
Posts: 3,275

Quote:
Originally Posted by dnilgis
I appreciate your reply. I fed my original post and your response into ChatGPT, and the following was the result:

I understand your concerns about the limitations and potential drawbacks of using AI tools like ChatGPT. It is true that the model is trained on a large dataset of text from the internet, and the accuracy and relevance of the information it generates can vary. Additionally, the model is still in a testing phase, and there may be bugs or limitations in its performance. However, it is important to note that the model is not intended to replace human knowledge or expertise, but rather to assist in tasks such as language generation and understanding. As a language model, ChatGPT is not able to access your computer and it is not tax software, so the scenarios you described do not apply to it.

It is also true that AI-generated content can be used for academic dishonesty, which is why it is important to use it responsibly and ethically. The model is not a replacement for learning and understanding a subject, but rather a tool that can assist in tasks such as language generation, language understanding, and other related tasks.

It is also important to note that AI and machine learning are still evolving fields whose capabilities are changing rapidly. It is important to stay informed about the latest developments and to use the technology responsibly and ethically.

#notai I'm back. I read somewhere yesterday that the Associated Press (AP) has been generating articles for 18 months and passing them off as journalist-written pieces... no one knew the difference. Think about that.
I was well aware it was in a test phase. My son told me that when he was looking at it and accessing it, they wanted him to accept tokens or something on his computer; I don't know any more than that. And the story about tax software isn't supposed to be a specific story, just an indication, whether you want to admit it or not, that many people bypass actually learning things and come to rely on these wonderful computers, apps, and so on instead. And if for any reason these computer programs fail, get hacked, or otherwise get compromised and don't work, then what do people do?

I get the story about the Associated Press using something like this to generate stories. But what the hell does the AP do if, at some point down the road, the software fails and there's no one left around who can actually write the stories? I may sound like the grumpy old man yelling at people to get off my lawn, but one thing I have learned is that if everyone thinks these wonderful AI and software aids will never fail or cause problems, they haven't been around long enough.

I'd at least wait until the testing period is completed and there is more feedback available before jumping in to use this, or at least make sure I knew what to do so that if the software did fail, I could still do it myself or find another way to get done what I needed done.

https://www.zdnet.com/article/what-i...-need-to-know/

https://www.nbcnews.com/tech/tech-ne...orks-rcna64446

Remember the movie Wall-E, and how all the humans on the spaceship looked and the shape they were in? Just what we need: more aids to do every little thing for us, so we don't even have to do things like think and type our own emails and responses anymore. Doctors and the medical profession keep telling all of us that we humans need to be doing more activities and using our brains and hands to help keep our minds healthy and active, especially as we get older. Things like this ChatGPT seem to do the polar opposite.
