
With new outfits. But there is one caveat: she was generated; she does not exist. Artificial intelligence is increasingly erasing the line between the real world and the fictional one, and the story is not always harmless. In the next video, the mother of a schoolboy near Moscow allegedly receives a call from her son, even though the child is in the room with her at that very moment: "I have the children here, show me the phone. Yes, Grisha?" "Mom, we're raising money for a classmate." According to the Wall Street Journal, the number of fraud schemes using deepfakes in the United States has grown almost eightfold in the last year alone. Is it a joke? Scammers can now copy anyone's identity. They recently tried to deceive me in a similar way: a fraudster who contacted me via video link pretended to be the mayor of Moscow. How are fakes created? Why study them? By what signs can you recognize a digital mask? What solutions are scientists looking for?

…a famous portrait in which Abraham Lincoln's head was attached to the body of another person, John Calhoun, Vice President of the United States. The explanation is simple: Lincoln did not like to be photographed, and very few photographs of him exist. This is probably one of the first examples of face replacement. But why did deepfakes become so widespread precisely with the development of generative neural network models? Andrey?

Andrey: Yes, Alexander, this is a very interesting example. It is probably worth delving into history and looking at how these technologies developed and where they found application.
Take the situation you just described with Abraham Lincoln. You specifically emphasized that he did not like to be photographed; in essence, the technology helped him. If that composite had not been made, and we judged him only by photographic portraits, we would hardly have any image of him at all. Of course, in that situation nobody was thinking about any bad connotations of using the technology.

There are many such examples. Today, with social networks and mobile phones, we have enormous capabilities: almost every image-processing application has built-in filters and algorithms that let you "improve" a face. These are beautification tasks, correction tasks, color and tone correction, and all of them distort the face to one degree or another. So I believe deepfakes should be looked at more broadly: this technology helps solve a variety of problems. It is already actively used in cinema and in advertising shoots, where it reduces the cost of an actor's time; a very popular actor, for example, cannot act in several films simultaneously. And with the growth of computing power, the development of hardware and new architectures, and the emergence of generative adversarial networks in 2014, these methods began to move into the hands of ordinary people, making it possible to create new tools for processing photos and videos and thereby simplify certain tasks.

Alexander: I see, thank you. But still, what is a deepfake in the modern world? Can we give it a definition?
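Andrey's point that even ordinary "beauty" filters measurably distort a face can be made concrete. The sketch below is a minimal illustration, not any real application's filter: a naive box blur stands in for skin smoothing, and we measure how far the "improved" pixels drift from the originals.

```python
import numpy as np

def box_blur(img, k=3):
    """Naive box blur, the simplest 'skin smoothing' building block."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    # Average each pixel with its k x k neighbourhood.
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

rng = np.random.default_rng(0)
face = rng.random((32, 32))      # stand-in for a face region
smoothed = box_blur(face)

# The 'improved' image is measurably no longer the original.
distortion = float(np.abs(smoothed - face).mean())
print(distortion > 0)
```

Real beautification filters are far more sophisticated, but the principle is the same: every such "improvement" replaces original pixel data with synthesized values.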
Andrey: In fact, modern definitions differ one way or another, but I would define a deepfake as the result of applying technologies for modifying or synthesizing voice, video and images, that is, multimedia content. Within this definition, if we are talking about pictures and video, we mean a local change: the entire scene remains original, and we replace the face or the head of one of the characters in a photograph or video. If we are talking about deepfake synthesis from the audio point of view, then this is usually voice cloning: cloning a timbre in order to synthesize the speech of a specific person.

Alexander: Let's still try to talk about how the technology for creating deepfakes actually works, so we can sort this out and make it clear.

Andrey: Let's start with this. The problem is posed as follows: we want to transfer a face from a source image to a target image. Accordingly, we need to extract certain characteristics from each of the images that come to
the input. If we are talking about the source, then in the research literature this is usually called identity: a person's distinctive characteristics, which the network, by extracting the right features, encodes as a vector. Different photographs of the same person are taken, and we train a neural network so that at the output we get more or less the same vector, while for different people different vectors should be obtained.

Alexander: So that is the logic?

Andrey: Yes, that is the logic. At the same time, there are two key directions here. The first is extraction from a single photo: when we have only one photo, we do not have a large amount of data about the person, and we need to learn to extract the maximum knowledge from it in order to transfer that knowledge onto a new person. In this sense there are two key components, and the first is the feature extractor, a model that can capture various characteristics: the direction of gaze; the stability of the identity, that is, how recognizable the face remains after the transfer, because the geometry of the face has a very strong influence; the complexion, the shape of the face, the textures, and so on. Each of these characteristics reflects a model's ability to correctly extract these features from the source.

Alexander: Do scientists find these correct features themselves, or are they determined during training?

Andrey: That, in fact, is the essence of deep learning: the network itself finds the features that should stand out. This task is very important from the point of view of mathematics and science, because a face has a great many features to extract, and a person can make completely different movements. To learn to understand them and digitize them, to transform them from the physical world
into a digital one, deep neural networks are used. In fact, if you look at the term "deepfake" itself, the word "deep" can be interpreted in different ways: deep as in a deep, very detailed and high-quality fake, and at the same time deep as in deep learning, the technology that provides the apparatus for extracting these features. The real challenge for scientists here is, first, to collect large amounts of data: we need large arrays of high-quality images of faces, and we need to be able to train models to extract the different features of shape, color and texture, everything connected with a person's face. The features obtained during extraction are then used to superimpose one face onto another as well as possible, if we are talking about how this mechanism is built.

Alexander: Andrey, great, but what you are telling us about is used not only by artists and musicians for their own purposes; attackers can use it too. Can you tell us a little more about this aspect?

Andrey: Well, in fact, any technology has two sides: it can be used for good and to the detriment. As for situations in which deepfakes were used to compromise someone or to create some kind of negative image, there are plenty of examples; you only need to open a search engine and type in something like "deepfake disgraced" to find a lot of such content. One of the latest news items I read on the internet was, as far as I remember, from Baltimore, where a fired employee decided, in revenge on his former manager, to make a deepfake with the manager's face, superimposing it in various video messages that discredited him. He was, of course, caught, discovered and identified, and then some process of punishment follows. So people will always use technology if they have it.
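The identity-vector idea described above, where photos of the same person should map to nearby vectors and photos of different people to distant ones, can be sketched in a few lines. Note that the embedding function here is a toy stand-in (a fixed random projection), not a real trained face encoder; only the comparison logic reflects the actual technique.

```python
import numpy as np

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy stand-in for a trained identity encoder. A real system uses a deep
# network trained so that photos of one person map to nearby vectors; here
# a fixed random projection plays the role of the "frozen" network.
def embed(face_pixels, dim=128, seed=0):
    rng = np.random.default_rng(seed)              # same seed = same "network"
    projection = rng.standard_normal((dim, face_pixels.size))
    vec = projection @ face_pixels.ravel()
    return vec / np.linalg.norm(vec)               # unit-length identity vector

# Two shots of the "same person" (tiny perturbation) vs a different person.
shot1 = np.full((8, 8), 0.5)
shot2 = shot1 + np.random.default_rng(1).normal(0, 0.01, (8, 8))
other = np.random.default_rng(2).random((8, 8))

same = cosine_similarity(embed(shot1), embed(shot2))
diff = cosine_similarity(embed(shot1), embed(other))
print(same > diff)  # same-person embeddings should be closer together
```

In a real pipeline this identity vector from the source is combined with pose, expression and lighting attributes extracted from the target before a generator renders the swapped face.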
Alexander: Let's see how deepfake technology works live. My colleague, head of the Information Security department at Innopolis University, Mikhail Seryogin, joins the conversation. Mikhail, hello. Tell me, which deepfake technologies are currently most popular among scammers, video or audio?

Mikhail: Good afternoon. In fact, this technology is not used that often for audio fakes, because it is much easier simply to call on behalf of a stranger, say some very influential person from the Sberbank security service, in order to convince us that we must transfer some amount of money to someone. When the technology is used, it is used where it works best: in visual fakes, because visual information is very convincing. For example, there was one case when scammers set up an office, furnished it the way police offices look, dressed a person in a police uniform, and pulled a mask over his face. In another case, an experienced IT specialist decided to bring a friend of his into the field, someone who had recently completed courses and did not yet have the relevant experience. Together they produced a large number of different photographs, a storyboard of the faces, and trained a neural network; this, by the way, took several weeks. The resulting neural model could put one face over the other, and in this combination they took calls in various ways in place of the experienced specialist.

Alexander: We have all watched Hollywood. How can you recognize a deepfake?

Mikhail: Well, here you need to catch some non-standard situations. For example, let me go back to Mr.
Bean again. This is what it looks like with glasses. You probably noticed various artifacts when I put these glasses on: the neural network is trying to draw this character's face even though he did not wear glasses. It does this not so badly, but I would not say it does it well. So, if you suspect that your interlocutor is not real, you can try to put him in a non-standard situation: ask him to turn and look somewhere, or, for example, to scratch his nose, or to wave his hands in front of his face. There are many different ways; the point is to get him out of the standard situation where a person simply speaks directly to the camera. Then you will see all the telltale artifacts.

Alexander: Mikhail, everything you have shown us, you did on your work computer, without any additional equipment?

Mikhail: Yes, and this is precisely why this technology is so striking. Moreover, it is not just accessible, it is even a little outdated. I am more than sure that within a few months neural networks will appear that are much richer in terms of the number of parameters and efficiency and will bring photos to life even better than the video you were just shown, which was fed through a virtual camera. So this technology turns out to be available; it has been tested, it has been around for a while, it is not difficult to master, and it does not require very large computing resources. I would not say, of course, that any laptop or computer will do, but a gaming computer will cope with this task with flying colors.

Alexander: Thank you, Mikhail. That was Mikhail Seryogin, head of the Information Security center at Innopolis University. Let's go back to the studio. Andrey.
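Mikhail's advice, provoking movements that make artifacts visible, has a crude automated analogue: checking whether facial contours stay consistent from frame to frame, since synthesized faces often "swim" at the edges. The sketch below is purely illustrative on toy data; the gradient-based edge map and the fabricated frames are stand-ins, not a real video forensic tool.

```python
import numpy as np

def edge_map(frame):
    """Crude gradient-magnitude edges (a stand-in for a real edge detector)."""
    gy, gx = np.gradient(frame.astype(float))
    return np.hypot(gx, gy)

def contour_jitter(frames):
    """Mean frame-to-frame change in edge maps; high values hint at
    contours that 'swim', one telltale sign of synthesized video."""
    edges = [edge_map(f) for f in frames]
    return float(np.mean([np.abs(a - b).mean()
                          for a, b in zip(edges, edges[1:])]))

rng = np.random.default_rng(0)
base = rng.random((32, 32))
stable = [base for _ in range(5)]                        # static, "real" clip
jittery = [base + rng.normal(0, 0.1, base.shape)        # unstable contours
           for _ in range(5)]

print(contour_jitter(jittery) > contour_jitter(stable))
```

Production detectors learn far subtler cues than this, but the principle is the same: measure statistical inconsistencies that the eye glosses over.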
Alexander: Straight away I have a question for you: is it possible to make programs, for example for smartphones, that will determine that something is a deepfake?

Andrey: In fact, the research community is actively engaged in this area, and it has been developing for quite a long time, since about 2008-2009. These technologies are called forgery detection, and the field is precisely a countermeasure to synthetic fakes and forgeries of all kinds. These algorithms extract special features in the facial area that are invisible to the human eye and that characterize the presence of something unnatural there. The human eye may see a normal face, but if artificial intelligence is used as a kind of expert that looks at the picture a little differently, from its own mathematical angle, and extracts special values from the face area, then a decision that it is a fake can be made with fairly high probability. Such algorithms exist; many well-known groups around the world and in Russia are developing such models, and this is in fact one of the ways to increase trust, or confidence, in artificial intelligence.

Why does this matter? Because we all sit on social networks now, read the news, and watch everything shown to us in videos and photographs. The human brain perceives visual information very quickly and with very little criticality, hence the need for automatic methods that could
recognize that this is, after all, a neural network's output and not a real picture, video or voice. One of the tools in the fight against deepfakes is the so-called watermark: all the decent neural networks that do generation should leave some kind of sign, perhaps invisible, that the content was generated by a neural network, so that attackers who want to take advantage of it have no way around it unless they know the source code and can change something.

Alexander: That is one option, but then a second one arises. If we proceed from what you said, the opposite trend appears: since there are tests that can detect deepfakes, new generations of networks will try to pass those tests. It is an arms race, like a cheetah and a roe deer: the tests become more and more advanced, and the corresponding neural networks become more and more advanced. So the question arises: is there a limit to this? Where are we going, and how far can it go? Will it bring us to the point where fakes are so realistic that we cannot distinguish them from the real thing? What are the prospects?

Andrey: In fact, video and image synthesis technologies are now really reaching a level where it is very difficult to distinguish a real picture from a synthesized one.

Alexander: You mean Sora?

Andrey: Including Sora. And if we are talking about pictures, then Midjourney is one of the solutions that generates photos in near-perfect quality, with faces so realistic that they cannot be told apart.
Sora, recently released by OpenAI, showed such realistic content that even on close examination of the recordings it is very difficult to find any traces that this is still synthetic.

Alexander: The one about the kitten that rides a vacuum cleaner?

Andrey: Including the kitten on the vacuum cleaner. Only if you storyboard the video, going frame by frame and looking very closely at the contours, can you tell: contours should not change unless it is camera distortion. So yes, there is already synthetic content that is sometimes very difficult to distinguish from the real thing, and everyone can be led astray by it. You need to learn to look closely at everything you see and not to trust what the picture shows directly; it may not be at all what it seems.

Alexander: Probably the main question for our viewers: how do you avoid becoming a victim of scammers who have such technologies?

Andrey: Yes, Alexander, this is a really important question. When a colleague wrote to me and there were audio messages in his voice, I did this: I asked the person a question that only he could answer. That specific question led to the person simply ceasing to reply, and after some time the account was blocked. So whenever there is even a small amount of doubt, you need to check by asking a counter-question. While you are in the role of the one answering questions, you do not control the dialogue; but as soon as you ask counter-questions, each counter-question entails
an answer that the attacker does not have in his script, because the attacker's script is built around a set of questions he must ask you in sequence, pushing toward extracting information from you. No matter what content appears, audio, video or photos, and no matter how skillfully attackers approach the deception of ordinary people, you do not need to be a highly technical specialist. You simply need to stay critical of what you hear, see and are told, and try to control the dialogue by asking questions. If it is a stranger: "I will call you back." If it is a person you know well, ask a question that only he can answer. As soon as you switch roles and change the course of the game, the attacker is lost; everything leads to him no longer answering you, the script breaks down, and it becomes much harder for the attacker to keep playing this game with you.

The Federation Council is currently preparing a bill on voice protection, and it is gratifying that artificial intelligence specialists are taking part in developing it; I was at one of the meetings myself. In general, it seems to me that this is a kind of symbiosis: on the one hand we fight machines with the help of machines, and on the other there are laws that get in the attackers' way.

Alexander: That covers the things we talked about today. Andrei Kuznetsov was in our studio. Andrei, thank you very much, see you soon.

Andrey: Thank you, Alexander. All the best, goodbye.
