AT2k Design BBS Message Area
Casually read the BBS message area using an easy-to-use interface. Messages are categorized exactly as they are on the BBS. You may post new messages or reply to existing messages.
From: VRSS
To: All
Subject: Cornell Researchers Develop Invisible Light-Based Watermark To Detect Deepfakes
Date/Time: August 12, 2025 8:40 PM
Feed: Slashdot
Feed Link: https://slashdot.org/
---
Title: Cornell Researchers Develop Invisible Light-Based Watermark To Detect Deepfakes
Link: https://slashdot.org/story/25/08/12/2214243/c...

Cornell University researchers have developed an "invisible" light-based watermarking system that embeds unique codes into the physical light illuminating the subject during recording, allowing any camera to capture authentication data without special hardware. By comparing these coded light patterns against recorded footage, analysts can spot deepfake manipulations, offering a more resilient verification method than traditional file-based watermarks. TechSpot reports:

Programmable light sources such as computer monitors, studio lighting, or certain LED fixtures can be embedded with coded brightness patterns using software alone. Standard non-programmable lamps can be adapted by fitting them with a compact chip -- roughly the size of a postage stamp -- that subtly fluctuates light intensity according to a secret code. The embedded code consists of tiny variations in lighting frequency and brightness that are imperceptible to the naked eye. Michael explained that these fluctuations are designed based on human visual perception research.

Each light's unique code effectively produces a low-resolution, time-stamped record of the scene under slightly different lighting conditions. [Abe Davis, an assistant professor] refers to these as code videos. "When someone manipulates a video, the manipulated parts start to contradict what we see in these code videos," Davis said. "And if someone tries to generate fake video with AI, the resulting code videos just look like random variations." By comparing the coded patterns against the suspect footage, analysts can detect missing sequences, inserted objects, or altered scenes.
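As a rough illustration of the coded-brightness idea described above (a toy sketch, not the researchers' actual scheme: the seed-derived ±1 code, the 2% modulation depth, and all function names here are assumptions):

```python
import random

def noise_code(seed: str, num_frames: int) -> list:
    """Derive a secret per-frame code (+1 or -1 per frame) from a shared seed."""
    rng = random.Random(seed)
    return [rng.choice((-1.0, 1.0)) for _ in range(num_frames)]

def modulate(base_brightness: float, code: list, depth: float = 0.02) -> list:
    """Per-frame brightness commands for a programmable light source.

    The modulation depth is kept tiny (here a hypothetical 2%) so the
    flicker stays imperceptible to the naked eye, as the article describes.
    """
    return [base_brightness * (1.0 + depth * c) for c in code]

# Four frames of coded output around a base brightness of 0.8.
levels = modulate(0.8, noise_code("secret-key", 4))
```

Because the code is regenerated from the seed rather than transmitted, any party holding the secret can later reproduce the expected flicker pattern for verification.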
For example, content removed from an interview would appear as visual gaps in the recovered code video, while fabricated elements would often show up as solid black areas. The researchers have demonstrated the use of up to three independent lighting codes within the same scene. This layering increases the complexity of the watermark and raises the difficulty for potential forgers, who would have to replicate multiple synchronized code videos that all match the visible footage.

The concept is called noise-coded illumination and was presented on August 10 at SIGGRAPH 2025 in Vancouver, British Columbia.

Read more of this story at Slashdot.

--- VRSS v2.1.180528
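The verification side can be sketched in the same toy model (again an assumption-laden illustration, not the published algorithm: footage that was lit by the coded source correlates strongly with the regenerated code, while footage lacking the code scores near zero):

```python
import random

def noise_code(seed: str, num_frames: int) -> list:
    """Regenerate the secret per-frame code (+1/-1) from the shared seed."""
    rng = random.Random(seed)
    return [rng.choice((-1.0, 1.0)) for _ in range(num_frames)]

def correlate(frames: list, code: list) -> float:
    """Score how well observed brightness fluctuations match the secret code.

    Genuine coded footage yields a clearly positive score; footage without
    the embedded code (spliced or generated) looks like random variation.
    """
    mean = sum(frames) / len(frames)
    return sum((f - mean) * c for f, c in zip(frames, code)) / len(frames)

code = noise_code("secret-key", 200)

# Genuine footage: base brightness 0.8 with the 2% coded flicker baked in.
genuine = [0.8 * (1.0 + 0.02 * c) for c in code]
# Forged footage: constant brightness, i.e. the secret code is simply absent.
forged = [0.8] * len(code)

genuine_score = correlate(genuine, code)
forged_score = correlate(forged, code)
```

A real detector would run this comparison per image region over time, which is how missing sequences or inserted objects would show up as local gaps in the recovered code video.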
VADV-PHP Copyright © 2002-2025 Steve Winn, Aspect Technologies. All Rights Reserved. Virtual Advanced Copyright © 1995-1997 Roland De Graaf.