Text to Art: Using NLP to Create Art
From finance and marketing to higher education, AI has taken over nearly every industry, and the world of creators and artists is no different. If someone had told you a few years ago that you could convert text to art, you would probably have laughed off the idea. Today, however, it is a reality that has opened the door to numerous benefits for the artist community.
The idea has become immensely popular thanks to the rise of NFTs. NFTs have ushered in an era of digital artwork, allowing artists to tokenize and sell their work in a quickly growing marketplace. For instance, Art AI recently introduced Eponym, a tool that converts text to art and lets creators mint the result directly as an NFT.
Since digital artwork is the future, we have decided to explore how text can be converted to art using NLP (Natural Language Processing). So, let’s begin, shall we?
What is Natural Language Processing (NLP)?
Natural Language Processing is a branch of Artificial Intelligence (AI) that allows computers and machines to understand texts and spoken words. NLP allows machines to understand the actual meaning of the data, including sarcasm, sentiment, and intent.
With NLP, humans can communicate with computers in their own natural language; using statistical, machine learning, and deep learning models, NLP interprets the command and produces the results.
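To make the idea concrete, here is a deliberately tiny sketch of an NLP-style pipeline: it tokenizes a sentence and scores its sentiment against a small hand-built lexicon. Real NLP systems use large learned models rather than fixed word lists; the lexicon and scores below are invented purely for illustration.

```python
# Toy NLP pipeline: tokenize a sentence, then estimate its sentiment
# by summing per-word scores from a tiny hand-built lexicon.
# The lexicon below is invented for illustration only.

import re

SENTIMENT_LEXICON = {"love": 1, "great": 1, "beautiful": 1,
                     "hate": -1, "ugly": -1, "boring": -1}

def tokenize(text: str) -> list[str]:
    """Lowercase the text and split it into word tokens."""
    return re.findall(r"[a-z']+", text.lower())

def sentiment_score(text: str) -> int:
    """Sum the lexicon scores of every token in the text."""
    return sum(SENTIMENT_LEXICON.get(tok, 0) for tok in tokenize(text))

print(tokenize("I love this beautiful artwork!"))
print(sentiment_score("I love this beautiful artwork!"))  # 1 + 1 = 2
```

A production system would replace the lexicon with a trained statistical or deep learning model, but the shape of the pipeline, raw text in, tokens in the middle, a prediction out, is the same.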
While the concept may sound complex, you will be surprised to find that you have encountered NLP several times in your life; in fact, probably today. Did you use a voice-operated GPS on your way to work? Or did you need Google Translate to understand what a colleague was saying in their native language? From virtual assistants like Siri and Alexa to the customer service chatbots that pop up when you visit your favourite websites, NLP is everywhere!
Why is NLP being used to Convert Text to Art?
Artists and creators are known as unique personalities with innate ways of expressing themselves. AI, on the other hand, is a highly logical field with little room for emotion. Combining the two might seem illogical, yet the pairing is gaining wide acceptance thanks to the advent of NFTs.
Don’t know about NFTs? Visit Vidalgo, where we talk about NFTs, their market growth, trends in this field and much more!
NFTs have turned artwork into a source of income. And like any other money-generating source, people are racing to multiply it. While a traditional artwork might take days to create, an NLP-based artwork can take minutes, if not seconds. It has also opened the door for non-artists to join marketplaces such as OpenSea. You no longer need to be a professional to earn from NFTs; you just need an idea and, voila, you have your art.
But What about Creativity?
Many people see the use of AI in creation as a threat to creators. While the artist community remains under-appreciated, there should be no doubt that, like any other community, it needs to keep up with the world.
According to Forbes, “it is becoming harder and harder to deny that artificial intelligence is capable of creativity.” Yet there is no doubt that this creativity ultimately stems from the creativity of a human being.
NLP-based artwork can never take away an artist’s job, because an artist initiates it. Such software can only display the artist’s creativity; without that vision, NLP-based art is nothing. Furthermore, a human is needed to filter out racist, hateful, or otherwise inappropriate elements that purely logic-driven software may generate.
DALL-E: An NLP Project by OpenAI
DALL-E is an extension of GPT-3, a language model that generates human-like text; GPT-3’s output can be hard to distinguish from text written by humans. Along the same lines, OpenAI introduced DALL-E, which can not only generate an original image from a caption but can also extend an existing image, for instance continuing it towards the bottom, on command. Like GPT-3, DALL-E is a transformer: a model that uses the attention mechanism to give different levels of importance to different parts of its input.
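The attention mechanism mentioned above can be sketched in a few lines. This is a minimal, toy-sized version of the scaled dot-product attention used inside transformers; the matrices here are tiny hand-picked examples, whereas real models like GPT-3 and DALL-E use thousands of learned dimensions and many attention heads.

```python
# Minimal sketch of scaled dot-product attention, the core operation
# of transformer models. Queries (Q) are compared against keys (K),
# and the resulting weights decide how much of each value (V) to blend.

import numpy as np

def attention(Q, K, V):
    """Weight the rows of V by how well each query matches each key."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                    # query-key similarity
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)     # softmax over each row
    return weights @ V                                 # blend values by weight

Q = np.array([[1.0, 0.0], [0.0, 1.0]])   # two toy query vectors
K = np.array([[1.0, 0.0], [0.0, 1.0]])   # two toy key vectors
V = np.array([[10.0, 0.0], [0.0, 10.0]]) # two toy value vectors
print(attention(Q, K, V))
```

Because the first query aligns with the first key, the first output row leans towards the first value vector; that is exactly the “different levels of importance” the attention mechanism provides.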
The name DALL-E is the scientists’ tribute to the Spanish artist Salvador Dalí and Pixar’s beloved 2008 robot, WALL-E. From the looks of it, DALL-E can interpret commands that are far from reality, for instance, “an armchair in the shape of an avocado.”
The fact that this AI can combine entirely unrelated textual concepts into a single art piece is enough to dumbfound anyone, including the scientists at OpenAI. Since the field is still largely unexplored, their creation has several limitations; for example, the software creates a different image every time the user rephrases the same command. However, the breakthrough is big enough that few are dwelling on the limitations for now.
How does DALL-E convert text to art?
DALL-E takes user input in the form of text; a companion model called CLIP is then used to rank the generated images by how well they match the caption. Internally, DALL-E divides its input into tokens. For a human being, each letter of the alphabet might be thought of as a token; for DALL-E, a caption is encoded into up to 256 BPE tokens and an image into 1,024 tokens, so the model processes a combined stream of up to 1,280 tokens. It then tries to reproduce as many of the details captured in those tokens as it can.
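The token budget described above can be illustrated with a short sketch. This is not OpenAI’s code; it is a hedged toy model of how a caption’s BPE tokens and an image’s tokens could be laid out in one input stream, using plain integers in place of real learned token ids, with the 256-text-token and 1,024-image-token figures taken from the DALL-E paper.

```python
# Toy illustration of DALL-E's input layout: up to 256 BPE text tokens
# followed by 1024 image tokens, a combined stream of up to 1280 tokens.
# Plain integers stand in for real learned token ids (illustration only).

MAX_TEXT_TOKENS = 256
IMAGE_TOKENS = 1024  # one token per cell of a 32x32 image grid

def build_stream(text_token_ids: list[int],
                 image_token_ids: list[int]) -> list[int]:
    """Concatenate text and image tokens into one model input stream."""
    if len(text_token_ids) > MAX_TEXT_TOKENS:
        raise ValueError("caption too long after BPE encoding")
    if len(image_token_ids) != IMAGE_TOKENS:
        raise ValueError("expected exactly 1024 image tokens")
    return text_token_ids + image_token_ids

stream = build_stream([5, 17, 42], list(range(1024)))
print(len(stream))  # 3 text tokens + 1024 image tokens = 1027
```

A caption that uses the full 256-token budget would yield a stream of exactly 1,280 tokens, the maximum the model reads at once.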
In this way, the program tries its best to render the maximum number of details mentioned in the input caption. However, it does not always interpret the text correctly; for instance, it struggles to draw shapes such as a pentagon. It also has trouble depicting a larger object sitting on a smaller one, or correctly understanding phrases like “standing left of”, “standing below”, and the like.
Similarly, the program can render strings of text, but the longer the string, the lower its accuracy. The program also has trouble distinguishing between colours, but don’t we all?
The Potential of NLP and AI Artwork in the Future
With DALL-E and its successors, artists of every kind will fare better. They can save time and supplies, corrections to their artwork will be easier, and brainstorming will be a piece of cake.
Fashion designers will no longer need to rely on digital or pencil sketches; they can simply tell the program how many flares a skirt should have. Logo designers will no longer need to mock up their logos on cards and shop fronts to check how they look; they can ask the program to render the design against multiple backgrounds.
Similarly, game designers can create their otherworld characters and see exactly what kind of beak fits in the place of a nose. The possibilities are endless! So, are you ready to explore them?
Visit our blog for more information about machine learning and deep learning in animation and artwork.