Scary deepfake tool lets you put words into someone's mouth
If you needed more evidence that AI-based deepfakes are incredibly scary, we present to you a new tool that lets you type in text and generate a video of an actual person saying those exact words.
A group of scientists from Stanford University, the Max Planck Institute for Informatics, Princeton University, and Adobe Research created a tool and presented the research in a paper (via The Verge), titled "Text-based Editing of Talking-head Video." The paper explains the methods used to "edit talking-head video based on its transcript to produce a realistic output video in which the dialogue of the speaker has been modified."
And while the techniques used to achieve this are very complex, using the tool is frighteningly simple.
A YouTube video accompanying the research shows several videos of actual people saying actual sentences (yes, apparently we're at that point in history where everything can be faked). Then a part of the sentence is changed -- for example, "napalm" in "I love the smell of napalm in the morning" is swapped for "french toast" -- and you see the same person uttering a different sentence, in a very convincing manner.
Getting this tool to work in such a simple manner requires techniques to automatically annotate a talking head video with "phonemes, visemes, 3D face pose and geometry, reflectance, expression and scene illumination per frame." When the transcript of the speech in the video is altered, the researchers' algorithm stitches all the elements back together seamlessly, while the lower half of the face of the person in the video is rendered to match the new text.
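To make the idea concrete, here is a minimal, purely illustrative Python sketch of that retrieve-and-stitch principle. It is not the researchers' code: the `Frame`, `annotate`, and `find_viseme_run` names are invented for this example, and it reduces the method to its simplest form, looking up a run of existing frames whose mouth shapes (visemes) match the newly typed words, which the real system would then re-render and blend into the video.

```python
from dataclasses import dataclass

# Illustrative sketch only -- not the published system. It mimics the
# high-level idea: every frame of the source video is annotated with a
# viseme (mouth-shape label), and editing the transcript means finding
# existing frames whose visemes match the new words, then re-rendering
# the speaker's lower face over those frames.

@dataclass
class Frame:
    index: int
    viseme: str  # mouth-shape label for this frame

def annotate(visemes_per_frame):
    """Stand-in for the automatic per-frame annotation step."""
    return [Frame(i, v) for i, v in enumerate(visemes_per_frame)]

def find_viseme_run(frames, target):
    """Find a contiguous run of frames whose visemes spell the target sequence."""
    n = len(target)
    for start in range(len(frames) - n + 1):
        if [f.viseme for f in frames[start:start + n]] == target:
            return frames[start:start + n]
    return None

def edit(frames, new_viseme_sequence):
    """Pick source frames that can voice the edited dialogue; a renderer
    would then blend their mouth shapes back into the original video."""
    run = find_viseme_run(frames, new_viseme_sequence)
    if run is None:
        raise ValueError("no matching viseme material in the source video")
    return [f.index for f in run]

# Toy example: frames annotated with visemes, then reused for new words.
frames = annotate(["AH", "F", "R", "EH", "N", "CH", "T", "OW", "S", "T"])
print(edit(frames, ["F", "R", "EH", "N", "CH"]))  # -> [1, 2, 3, 4, 5]
```

The real pipeline is far richer (3D face geometry, reflectance, illumination, and a neural renderer for the lower face), but the core editing move is this kind of transcript-driven reuse of the speaker's own recorded material.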
On the input side, the tool allows a user to easily add, remove or alter words in a talking head video, or even create entirely new, full sentences. There are limitations -- this tool can only be used on talking head videos, and the results vary widely depending on how much of the text is altered or omitted, for example. But the researchers note that their work is just the "first important step" towards "fully text-based editing and synthesis of general audio-visual content," and suggest several methods for improving their results.
Videos generated by the tool were shown to a group of 138 people, who mistook the fake videos for real ones in 59.6% of their responses. For comparison, the same group identified the genuine videos as real 80.6% of the time.
The tool isn't widely available, and in a blog post, the researchers acknowledge the complicated ethical considerations of releasing it. It can be used for valid causes, such as creating better editing tools for movie post-production, but it can also be misused. "We acknowledge that bad actors might use such technologies to falsify personal statements and slander prominent individuals," the post says. The researchers propose several techniques for making such a tool harder to misuse, including watermarking the video. But it's quite obvious that it's only a matter of time before these types of tools are widely available, and it's hard to imagine they'll solely be used for noble purposes.