My experience with ChatGPT
A few weeks ago, I asked ChatGPT to write an article and I have to say, it exceeded my expectations. Not only did ChatGPT write a comprehensive article, but it also included helpful headlines for each section. Since then, I’ve been thinking a lot about Artificial Intelligence (AI) and what it could mean for the future.
I’ve been keeping up with news about AI developments, but ChatGPT really stood out to me. While there are other models out there doing similar things, there must be a reason why ChatGPT made such big headlines. I think it’s because people like me started using AI models for the first time and got very tangible results. However, as useful and unique as it can be, I do have some concerns.
After receiving the article from ChatGPT, I requested another one using similar keywords. ChatGPT delivered, but the resulting article was 62% similar to the first one. I doubt this would happen if I asked two people to write an article using the same keywords. As humans, we all have unique minds and experiences that shape our thoughts and words. Each person’s creativity is unique because it requires unique brain networks to fire simultaneously.
It’s no surprise that ChatGPT lacks originality, since it’s trained on millions of pieces of information from various sources. AI relies on pre-existing information to produce content. In contrast, humans learn in various ways and may draw different conclusions from similar experiences.
Another challenge with AI is bias. AI algorithms are only as good as the data they’re trained on, which can lead to biased and intolerant results. Microsoft experienced this firsthand in 2016 when its AI chatbot on social networks became racist and misogynistic within 24 hours.
Even OpenAI, the creator of ChatGPT, acknowledges the limitations of AI. They warn that it “may occasionally generate incorrect information,” “may occasionally produce harmful instructions or biased content,” and has “limited knowledge of the world and events after 2021.”
The last limitation is telling. Is it possible for an AI chatbot to “live in the present”? Today’s reality is so fragmented and dependent on individual perspectives that even humans struggle to identify the truth. How can scientists train an AI model to differentiate truth from falsehood? If this were possible (let alone easy), why haven’t humans mastered this ability yet?
I wonder to what extent technological innovation occurs for the sake of innovation itself, rather than to address and solve a specific problem. Are we using AI to solve significant global issues, or are we merely using it to fix inconveniences?
Recently, I read news about a robo-dog for people with visual impairment. Equipped with AI, it talks and aids them in navigating cities. Why were living service dogs not good enough?
Apparently, they are expensive to train and maintain, so the answer technology offered was, what else, robots. That would seem reasonable, except that 90% of vision loss is preventable or treatable with spectacles or eye surgery. Millions of people have visual impairment because they don’t have access to such treatments, making it a significant and global problem. Shouldn’t we use our resources to prevent and treat vision loss in the first place?
We have built a world so dependent on technology and so obsessed with growth that we are now willing to put a price on the only thing that makes us unique in this world: our brain. While we have not fully comprehended its capabilities, we are trying to make a digital copy of it. Are we sure we know what this means?