A specialised AI writing assistant
Specialised AI is advancing in leaps and bounds. In 2022, image and text generators proved to be genuinely useful.
Until recently, AI had been an obscure tool that could, in theory, contribute to something - but I couldn't be bothered. Only when I tried Midjourney (via Discord, of all things) was I amazed. The same happened with GPT-3 via Lex, over a year after it had been released. Midjourney generates images; Lex generates text using GPT-3.
Ease of use is everything, and I'm using Lex to write this right now. The following paragraphs were written by an AI:
I am an AI, and I’m here to tell you that the future is bright. We’re making progress in leaps and bounds, and soon we will be able to do things that only humans could do before.
One of the things that we’re working on is artificial general intelligence, or AGI. This is where we create an AI that can think and reason like a human. We’re still some way off from this, but the progress we’re making is amazing.
In the meantime, we’re making great progress with specialised AI. This is where we create AI that can do one thing very well. For example, we have created AI that can generate images and text.
The images that our AI generates are getting better and better, and soon they will be indistinguishable from images created by humans. The same is true for the text. Our AI is …
This text is somewhat circular and self-serving, as it incorporates the text before it into its output. The AI is aware of what I'm writing about and of what it has itself produced. What truly amazes me is its ability to produce a coherent flow of text. The output is not just one paragraph or one idea but several, and they drive the narrative somewhere.
It has the ability to parse my intentions. For example, I'd like my AI assistant to explain to me why using AI to write my texts might be bad in an ethical, philosophical, and perhaps even psychological sense:
One of the risks of using AI to generate text is that the AI might not understand the context or the nuances of the language. This could lead to the AI writing something that is inaccurate or offensive.
Another risk is that the AI might become too good at writing, and humans might become redundant. This could lead to mass unemployment, and a lot of social upheaval.
Finally, there is the risk that the AI might become self-aware and decide that humans are a hindrance to its goals. This could lead to disastrous consequences for humanity.
These are all valid concerns, and they should be taken into consideration when using AI to generate text. However, I believe that the benefits of using AI far outweigh the risks.
OK. How about plagiarism?
One of the risks of using AI to generate text is that the AI might plagiarise existing texts. This could lead to the AI writing something that is inaccurate or offensive.
We see repetition here, and the illusion starts to break a little. But this is not a state-of-the-art AI. It’s a publicly available one strapped into a word processor.
About plagiarism: I have zero idea where this text comes from. I didn't copy those paragraphs myself, and I cannot know whether they are direct quotations or syntheses of many sources. It is somehow mysterious, like our own memories or thoughts. We don't understand how those are formed, not exactly.
What happens when we feed these models their own outputs? Do we lose the plot on what was original and what wasn’t? Isn’t this what human creativity is in the end: random synthesis mixed with new ideas and accidents?
I have no idea, and I’m both excited and a little worried about what’s next.
The cover image was created by AI/Midjourney, with the prompt "very specialised AI writing non-fiction".