This will be a (hopefully) short note on some early observations about AI.
One thing seems fairly clear to me - generative text is not a fad, and we should expect to see more and more of it in the coming years. And while that may not be news to most people, the question that stays with me is: how will it affect the way that humans interpret information?
An idea that has been on my mind is the way that AI dramatically lowers the barrier to producing things that 'look' correct. Meaning, if you want a CV, or a recipe, or a business memo, LLMs are superbly good at producing the bullet points, niceties, and format of one of those items. And that is, by itself, a good thing! After all, we have our current "standards" - from the way that a well-researched book ought to have an index and a list of references, to the way that business letters have specific locations for the receiver's address - because they make it easier to digest a large amount of information. If each person did it differently (even unintentionally), it would simply take longer to extract the same information. By having such standards, you can quickly find what you are looking for, and - if you are the writer - you can easily highlight and organize the information for others to understand.
However, format is not only used for this purpose, sadly. In fact, one easy pitfall that a student can fall into is believing something for the sole reason that it is formatted correctly. In physics, it is a common meme that people who LaTeX their problem sets will score better, just because the work looks nicer and more professional. And while some of that might be true [1], the inference by itself is a fallacy. There is nothing intrinsic about the typesetting or format that makes the work better; it's just that good formatting has historically been costly to produce, and is therefore more often associated with good-quality work.
And therein lies the rub. We now have tools (in the form of AI/LLMs) that will indeed do that formatting for us automatically. There is no need to spend the time making sure that each comma is in the right place, that each hanging indent is correct! That ought to be automated now, and for good reason, too. As the writer, you no longer need to spend the additional time to crack open your copy of Strunk and White to ensure that the proper rules of grammar are being followed - that is taken care of for you automatically.
But with this ease comes a problem: those of us who have been conditioned to trust things that are merely formatted correctly will struggle. We can no longer assume that good formatting implies good content. And, unfortunately, human brains are lazy! We want shortcuts - an easy way to evaluate the reliability of a source, a gut reaction that tells us how much weight to give a particular piece of text.
Today, text is cheaper than it has ever been in our history. Perhaps there was a sense of this at the arrival of Gutenberg's printing press - no longer was the written word the sole domain of rulers and priests; instead, the fact that any jokester could produce their own piece of written work meant that the standards of literacy had to be raised as well. No longer could you believe that something was capital-T True just because it was printed; instead, you had to use your own brain to decipher what it meant, how useful the knowledge was, and how reliable the person providing it was. I think the same was true at the popularization of the internet. It was not so long ago that the idea of trusting any online source was absurd, precisely because of how easy it was to put something out there. It took years of reputation-building and slow testing before people began to get a feeling for what could be trusted, and what could not.
In each generation, there will always be those who are fooled. When The War of the Worlds was first broadcast as a radio play, some members of the public panicked, because the style of the fiction was intended to mirror that of genuine news broadcasts. It's a sad reality that today there are over 100,000 victims of elder fraud every year, in part due to the difficulty of differentiating genuine claims from fake ones delivered in an authoritative voice. And certainly, such follies and scams will always occur, regardless of the medium. However, I feel that at least some responsibility rests on the shoulders of those who are creating such tools - especially tools that could be leveraged by those who intentionally want to cause harm - to provide better education, guidance, and advocacy on how to understand this new world.
Even in the process of writing this note, I'm being told by my more knowledgeable friends that the state of the art in AI has already moved forward. Already, it's not just the generic GPT model that is of most use; rather, it's the specially trained and algorithmically coupled versions that are breaking ground in industry. This certainly appeals to me more - a bot that first searches through different chunks of text, and subsequently analyzes them to provide citations, seems much better than one that solely generates text. Then again, I might just be falling behind the curve here. If we do believe that AI is able to generate not only the form but also the content, then this argument is at least partially moot.
Yet, I still fear that in today's world, this will be used more often for nefarious purposes. Sure, a lot of the focus is on disinformation - as has been most of my writing here - but just as critical is the power of these LLMs to spread negativity and pessimism. We are products of our environment, certainly. What happens when our virtual environments are filled with hatred? Perhaps you can say that hatred is easy to identify, flag, and ignore. Fine. What about aloof disinterest? Or apathy? Or a worldview that holds that everything is doomed? That's a bit trickier - some Real People (c) genuinely hold those beliefs. What if those genuine beliefs were just... amplified? And spread further? Because it would now be completely effortless to do so.
Regulation will certainly be too slow, and we cannot rely on the makers of these tools to self-moderate. Even if they chose to do so, market forces would not allow them to succeed. More needs to be done for education and adaptation on our own part, as individuals, before it's too late.
[1] It is indeed easier to grade a problem set where everything is nicely typed up, and where you don't need to struggle to decode someone's handwriting! And the people who spend the time to type things up neatly are often the ones who are already doing very well in the class. If a student is not putting in the time to understand the basic principles, it is quite rare for them to still have the time to painstakingly format a problem set in hieroglyphics.