
Crazy hands and “glassy” eyes: What do neural networks still draw badly? How to avoid common mistakes?


A selection of problems in AI creativity.

The boom in neural networks is one of the biggest trends of recent years, and it gained real momentum at the end of 2022. Artificial intelligence (AI) became available to a wide audience as a way to generate unique images and illustrate the wildest fantasies, and the Internet is already full of claims that the designer's profession is obsolete because the technology draws better and faster. At the same time, there are several problems that the creators of neural networks have not yet fully solved. We discuss them in this article.

Crazy hands

The main reason users criticize neural networks is their inability to draw hands. AI does not reliably learn the shape of the human hand: it adds extra fingers or, conversely, draws "dinosaur paws", and generally distorts limbs. Sometimes a neural network even draws an entire extra arm or leg. The drawing style does not matter here: crazy hands appear in realistic images and "cartoon" ones alike.


The fact is that AI does not understand its references in terms of anatomy and human perception. Moreover, in many source images hands are shown from different angles, so a different number of fingers is visible; when there are many similar objects in an unclear quantity, the model produces a random result.


Poorly drawn hands are the subject of many Internet memes. Users create entire profiles and communities dedicated to publishing strange AI creations.

Although neural networks have recently learned to draw the right number of fingers, the result is still far from reality: hands often come out disproportionately small or large, fingers too long, and joints bent at unnatural angles.


Extra teeth, tongues and jaws

Sometimes a neural network draws teeth and other details of the mouth no better than hands: a mouth inside a mouth, a jaw on top of a jaw, crooked teeth, or an absurd number of them. As with limbs, the AI has no idea how to draw many similar objects within a single one, or why the end result looks unnatural.

The teeth a neural network draws are often unrealistic or stick out at odd angles, so a wide smile or an open mouth carries a high risk of ruining an otherwise interesting piece with a single detail.



"Something with the eyes"

This problem was demonstrated by the many experiments with the Lensa neural network, in which users generated avatars in various styles from their photos. Many complained about squinting eyes and a strange gaze in the images. Discarding such pictures was especially disappointing, since many of them otherwise came out beautiful and vivid.

As Medialeaks noted, the highest percentage of "eye" defects occurs in images of people wearing glasses; in other portraits the eyes are rendered more correctly.


However, even in art from more advanced neural networks, the eyes may be drawn normally and directed at the viewer, yet seem out of focus. The gaze is "glassy" and lifeless. This can be seen in the works posted by Midjourney users in the open chat on Discord.


The problem persists even when AI is specifically designed to solve eye contact with the camera. Nvidia recently released a neural-network-based technology that simulates eye contact in video when a person is not looking at the camera. It looks as if the AI simply moved the eyes: the gaze is as lifeless and defocused as in many neural-network images.

The "deepfake" eyes also move unnaturally fast, as noted by a journalist from The Verge who tested the new feature. For part of the video he looks at the camera, so the viewer can compare what it looks like before and after AI processing. Wearing glasses or not did not affect the quality of the result.

Cropped heads and twisted faces

It also happens that a neural network generates art in which a person's head is truncated or missing entirely. The reason is that when uploading references, people add pictures that are not full-length, or ones where the torso occupies most of the frame. The neural network gets confused, decides the torso is the most important part of the portrait, and allows itself to cut off the head.


However, worse than a cropped head is a badly drawn one with a twisted face, which is not uncommon in AI work. A neural network can arrange facial features unnaturally or merge them with the background. The technology has no concept of "natural", and since all people look different, mistakes are inevitable when it searches for universal solutions.


Blurred details, spots and lines

It happens that art from a neural network seems beautiful and detailed, but on closer inspection it turns out no specific details are actually drawn. This problem is more common than it might seem: spots instead of flowers, spots instead of birds, lines and spots instead of text. The shapes resemble what should be depicted, but nothing concrete can be made out in such pictures.



How to avoid common mistakes

For ordinary users generating content with neural networks, enthusiasts advise avoiding hands altogether (this can be requested in the interface of some AIs, including Midjourney) or depicting them so that they are busy with something. Describing the hands in detail also raises the chances of success: it is better to spell out how the fingers and the skin on them should look.

To keep a neural network from cutting off characters' heads, you can upload full-length references. Explicitly asking the AI for a full-length image of a person also helps avoid cropping. Adding detail can improve the result as well: it is worth describing what the character is doing, their movements, and the positions of their limbs. At the same time, you should not ask for portraits or attach a positive rating to such images, because the AI considers images of people with cropped heads more attractive.
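The tips above boil down to writing a detailed prompt plus a list of things to exclude. As a rough illustration, here is a minimal Python sketch of how such a prompt could be assembled; the helper name and keyword lists are purely hypothetical and not part of any particular generator's API, though many tools (for example, Stable Diffusion frontends) do accept a free-form prompt alongside a separate list of negative terms.

```python
# Hypothetical helper for composing a text-to-image prompt that applies
# the tips above: ask for a full-length figure, keep the hands "busy"
# with explicit detail, and list common defects as negative terms.

def build_prompt(subject, pose_details, negative_terms):
    """Combine a subject, explicit pose/limb details, and terms to avoid.

    Returns a (prompt, negative) pair of comma-separated strings, the
    shape many generators expect for prompt and negative prompt.
    """
    prompt = ", ".join([f"full-length portrait of {subject}"] + pose_details)
    negative = ", ".join(negative_terms)
    return prompt, negative

prompt, negative = build_prompt(
    subject="a violinist on stage",
    # Describe what the hands are doing so they are busy, and request
    # the whole figure so the head is not cropped.
    pose_details=[
        "both hands holding the violin and bow",
        "five fingers on each hand, natural joints",
        "head and feet fully in frame",
    ],
    # Typical negative-prompt terms targeting limb and face defects.
    negative_terms=["extra fingers", "deformed hands",
                    "cropped head", "blurry eyes"],
)

print(prompt)
print(negative)
```

The exact keywords that work best vary by model, so treat the lists here as a starting point rather than a recipe.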

The technology is still learning, and many works already depict people and real objects more correctly. Whether there will be a place for human designers in the future is an open question that can only be answered years from now, by watching how artificial intelligence develops.
