After being astounded by ChatGPT’s capabilities, the Chinese Internet quickly plunged into a discussion of how it would affect employment. The resulting worries and anxieties seeped into everyday life, including students’ choice of major. A friend of mine who recently took the college entrance exam loves journalism and wanted to study it, but her friends and parents urged her to “change your major,” telling her that ChatGPT would gradually replace the news-editing profession. Is this really the case?
It is true that there have been many reports of LLMs such as ChatGPT beginning to “replace” editors and reporters. A few weeks ago, the American technology news site CNET was found to have been quietly using AI to write explanatory financial articles since last November. In early March, it emerged that CNET was significantly reducing its workforce. Although the company said the layoffs had nothing to do with AI in the newsroom, Red Ventures, the firm that recently acquired CNET, said it “wanted to simplify operations and the tech stack,” and CNET’s former editor-in-chief became “Vice President of AI Strategy.” Also in early March, the CEO of Axel Springer (the German publisher that owns outlets such as Politico and Business Insider) wrote in an internal memo that AI tools like ChatGPT would set off a revolution, adding that these tools “will soon be better at compiling information than human journalists.” The company also announced that its newspapers, such as Bild, would lay off staff because AI would soon make many roles redundant.
Although “ChatGPT replacing editors and reporters” seems to have become a trend, how solid is that trend? In other words, can this kind of AI tool really replace text editing in terms of function? Setting everything else aside, from my own experience, ChatGPT is at least not yet a qualified substitute.
First of all, ChatGPT told me that its knowledge cutoff is January 2022, which means it “does not have enough information” about news that occurred after that date. For example, when I asked about the news that scientists created a “chimeric” monkey baby on November 9, 2023, it replied that it did not know, adding the thoughtful reminder that “my information may be out of date, and scientific research may have progressed.” Secondly, ChatGPT loves to fabricate information and invent URLs. Whenever I ask it for literature on my research question (including authors and links), it gives me wrong information: for one question I asked, one paper title was wrong, only one link was correct, and the rest were irrelevant or invalid. This is because LLMs such as ChatGPT, GPT-3.5, and Google’s LaMDA are designed to model language itself, not to retrieve or understand information. When my question was entered, ChatGPT produced the statistically most plausible answer given the hundreds of billions of words it was trained on. ChatGPT does not know how to judge; all it does is repeatedly use statistics to predict the next word. In computer science such systems have been called “stochastic parrots”: models that can produce grammatically plausible strings of words without really understanding their meaning. ChatGPT is exactly such a model, with no real comprehension. The public, however, is easily fooled by its smooth surface into thinking it is “aware,” when in fact it cannot grasp the context and logic of natural language. Therefore, the writing jobs suited to ChatGPT are routine, template-based ones that do not require originality, such as appointment emails or formal letters.
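The “statistics over the next word” idea can be sketched with a toy bigram model. This is a deliberate oversimplification for illustration only (the corpus, function names, and sampling scheme here are my own invention; real LLMs use neural networks over subword tokens), but the generation loop is the same in spirit: pick the next word based only on observed frequencies, with no understanding of meaning.

```python
import random
from collections import defaultdict

# Toy "stochastic parrot": count which word follows which in a tiny corpus,
# then generate text by repeatedly sampling a statistically likely next word.
corpus = ("the reporter wrote the story . "
          "the editor read the story . "
          "the reporter filed the report .").split()

# Map each word to the list of words observed to follow it.
# Duplicates in the list encode frequency, so random.choice
# samples in proportion to how often each continuation occurred.
follow = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follow[prev].append(nxt)

def generate(start, n_words, seed=0):
    """Emit up to n_words continuations of `start`, one sampled word at a time."""
    random.seed(seed)
    out = [start]
    for _ in range(n_words):
        candidates = follow.get(out[-1])
        if not candidates:  # no observed continuation: stop
            break
        out.append(random.choice(candidates))
    return " ".join(out)

print(generate("the", 8))
```

Every “sentence” this produces is locally plausible because each adjacent word pair appeared in the training text, yet the program has no idea what a reporter or a story is, which is exactly the limitation the stochastic-parrot label describes.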
In practice, the most reliable way to get usable text out of it is to feed it some material and ask it to tidy the prose up, or to summarize a few articles; but reports produced this way tend to be rigid and boring.
Although we can all roughly agree that “writing well” means writing smoothly, with depth, substance, and unique insight, none of these are clear goals that can be easily quantified. Competing for readers’ attention is arguably the core competition in news, and without creativity it is hard to make anything stand out. I would say it is not so much that ChatGPT is “completely uncreative” as that it has a very limited “creativity” that leans heavily on existing material: an ability to imitate, splice, and recombine. Because it is trained on enough data to capture the characteristics of a style, it can generate a science-fiction story in the voice of Mark Twain or a horror animation in the style of Hayao Miyazaki. Compare this with the Go-playing AI, AlphaGo. Go, unlike art or literature, has a perfectly clear goal: to end the game with more territory than the opponent. Its algorithm therefore has no reason to resist “innovative” moves, and AlphaGo Zero no longer even relies on human game records as training data; it improved entirely by playing against itself. So it is not surprising that it made moves no human had ever played. ChatGPT, by contrast, cannot “break through” what already exists; it has no imagination. The purpose of its computation is to estimate the probability of the next word from existing text, that is, to say what many people have already said. This is why ChatGPT’s writing so often reads like the output of a propaganda machine: it is made of clichés.
Computer scientist Ian Bogost, writing in The Atlantic, also pointed out that ChatGPT’s shortcomings will lead to more work:
“Whatever ChatGPT and other AI tools end up doing, they will impose new labor and management regimes on the labor required to do so-called labor-saving work.”
The result is to create new kinds of work for humans. Regardless of whether this “technological solutionism” is good or bad, the relationship between AI tools and humans does not seem to be one of the former simply displacing the latter. So my conclusion is that ChatGPT, as it stands, is basically unable to replace editors and reporters.
Meredith Broussard, research director at New York University’s Alliance for Public Interest Technology, coined the concept of “technochauvinism”:
“When you predict that a new technology tool will usher in a new era, ask yourself: Will it really happen? The overhyping of technology tools is a bias, a bias I call ‘technochauvinism’. This bias assumes that technical solutions are always better than other solutions.”
We are by turns obsessed with tools and afraid of them, forgetting that technology will not do the work for us. It will most likely neither create a utopia nor bring about the end of the world; it can do a great deal for us, but it cannot do everything. “Technological systems will not eliminate social problems, but they will shift and mask them,” Broussard says. This conclusion may be old-fashioned and somewhat disappointing, but when we consider what the world will look like tomorrow and what these tools will ultimately bring to our lives, one thing has not changed since the beginning of human society: the answer is human beings themselves.
Original article:
https://news.google.com/rss/articles/CBMiZWh0dHBzOi8vbWVkaXVtLmNvbS9AYzEzMDE0NXR5cS9pcy1pdC1wb3NzaWJsZS1mb3ItY2hhdGdwdC10by1yZXBsYWNlLWVkaXRvcmlhbC1yZXBvcnRlcnMtMzdlNjU0OTU5NGRm0gEA?oc=5&hl=en-US&gl=US&ceid=US:en

