
Keynote Speakers

Keynote Speech #1: Tuesday, December 13th

Erik Krupicka

Title: New Challenges in Image and Video Forensics
Summary: The rapid development of camera and image processing technology in recent years has had a great influence on our forensic casework.
Well-established methods in image and video forensics, such as Photo Response Non-Uniformity (PRNU) analysis or the detection of defective pixels, which have been relied on for many years, are now challenged by new ways of processing and manipulating images.
In this presentation we will highlight how modern camera and imaging technology changes the way forensic tools are used and which new tools have to be developed.
The global success of smartphones has made "computational photography" the de facto image processing standard, leading to manifold complications in forensic casework.
Artificial intelligence, despite all of its benefits, poses an ever-growing challenge for "classical" forensic methods used to detect image and video manipulations, deepfakes, or morphed images.
Last but not least, the widespread use of social media platforms such as YouTube, Instagram, and TikTok is also heavily affecting our casework. In particular, the small image and video dimensions and the sometimes aggressive compression of multimedia content require extra attention and effort when working on criminal cases.
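The basic idea behind the PRNU analysis mentioned above can be illustrated with a deliberately simplified sketch: a camera's sensor imprints a weak multiplicative noise fingerprint K on every image, so an image can be linked to the camera by correlating its noise residual with I*K. Everything below (the box-filter denoiser, the synthetic fingerprint, the test scenes) is an illustrative assumption; real PRNU pipelines use wavelet denoising and maximum-likelihood fingerprint estimation.

```python
import numpy as np

rng = np.random.default_rng(1)

def residual(img):
    """Noise residual: image minus a crude 3x3 box-filter denoising
    (a stand-in for the wavelet denoisers used in practice)."""
    pad = np.pad(img, 1, mode="edge")
    h, w = img.shape
    denoised = sum(pad[i:i + h, j:j + w]
                   for i in range(3) for j in range(3)) / 9.0
    return img - denoised

def ncc(a, b):
    """Normalized cross-correlation between two arrays."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

# Synthetic multiplicative sensor fingerprint K and two smooth test scenes.
K = 0.02 * rng.normal(size=(64, 64))
ramp = np.add.outer(np.linspace(100, 150, 64), np.linspace(0, 50, 64))
img_same = ramp * (1 + K) + rng.normal(0, 2, (64, 64))   # taken with this camera
img_other = ramp.T + rng.normal(0, 2, (64, 64))          # taken with another camera

# Link test: correlate each noise residual with the expected PRNU term I*K.
c_same = ncc(residual(img_same), img_same * K)
c_other = ncc(residual(img_other), img_other * K)
print(f"same camera:  {c_same:.3f}")   # clearly positive
print(f"other camera: {c_other:.3f}")  # near zero
```

The talk's point is precisely that this classical pipeline degrades once computational photography, heavy recompression, and small social-media resolutions distort the residual.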
Short Biography: Erik Krupicka received his Ph.D. in crystallography from the University of Ulm in 2002. After working for several years in the X-ray lab of the Forensic Science Institute of the Federal Criminal Police Office of Germany (the "Bundeskriminalamt", BKA), he transferred to a newly founded unit for cryptanalysis and password recovery. Since 2020 he has been working as a forensic expert in the unit for image, audio, and video forensics.
Besides forensic casework, he is engaged in various research activities to further develop the tools needed to broaden the scope and capabilities of image and video forensics. His current research interests focus on the forensic application of the electrical network frequency, steganography, and image and video codecs.

Keynote Speech #2: Wednesday, December 14th

David Luebke

Title: Frontiers of Neural Media Synthesis
Summary: Modern AI—in the form of neural networks, truly massive datasets, and enormous computational horsepower—has enabled a “Cambrian Explosion” of new techniques for synthesizing images, video, and audio. Tasks that would have taken a talented artist hours or days can now be done in seconds or even milliseconds; tasks that would have taken an entire Hollywood studio weeks or months can be done in hours or even minutes. With this incredible progress comes a fresh urgency for research in information security, media forensics, and related areas. I’ll discuss some of the latest research on media synthesis from NVIDIA, highlight an older project (the StyleGAN3 Detector Challenge) as a case study in the responsible rollout of AI-powered media synthesis, and close with some observations on current trends and a call to action on near-future use cases that I believe need more attention.
Short Biography: David Luebke helped found NVIDIA Research in 2006 after eight years on the faculty of the University of Virginia. He received his Ph.D. under Fred Brooks at the University of North Carolina in 1998. Luebke runs a research group focused on computer graphics and neural image synthesis; his principal research interests are computer graphics, generative neural networks, and virtual reality. He is a Fellow of the IEEE and a recent inductee into the IEEE VR Academy; his other honors include the NVIDIA Distinguished Inventor award, the IEEE VR Technical Achievement Award, and the ACM Symposium on Interactive 3D Graphics "Test of Time Award". Together with his colleagues, Dr. Luebke has authored a book, a major museum exhibit, and over two hundred papers, articles, chapters, and patents.

Keynote Speech #3: Thursday, December 15th

Julia Hirschberg

Title: Detecting Intent in Misinformation and Disinformation in Social Media
Summary: Misinformation and disinformation in social media both spread false information. However, while misinformation denotes false information spread regardless of any intent to deceive, disinformation denotes false information spread with the intent to deceive. The term disinformation was first recorded in 1965–70 as a translation of the Russian dezinformatsiya, itself based on the French désinformer (“to misinform”). Disinformation can serve as propaganda, used by dictatorships to convince their citizens that their country is good and others are evil. More recently it has also been used to spread false information about COVID-19, climate change, and the Russia-Ukraine conflict. We have been collecting Twitter posts on these three topics and identifying the intent behind them: is the information accurate or inaccurate? If inaccurate, what is the intent behind the post? Is it malicious or not? If malicious, is the intent to polarize a population, to call people to action, to discredit certain entities (countries or politicians), or to go viral? If not malicious, is it intended to be humorous or sarcastic, or is it simply ignorant? We will describe our data collection, the models we have developed to distinguish malicious from benign posts, our results on identifying the intent behind the posts, and the tweet and tweeter graphs we are building to trace how these posts spread.
Short Biography: Julia Hirschberg is the Percy K. and Vida L. W. Hudson Professor of Computer Science at Columbia University. Her research focuses on prosody, spoken dialogue systems, and emotional and deceptive speech. She received her Ph.D. in Computer Science from the University of Pennsylvania in 1985. From 1985 to 2003 she worked at Bell Labs/AT&T Labs as a Member of Technical Staff and as a Department Head, creating the Human-Computer Interface Research Department. She was president of the International Speech Communication Association (2005–2007), and edited Speech Communication (2003–2006) and Computational Linguistics (1993–2003). She is an AAAI Fellow.

Keynote Speech #4: Friday, December 16th

Mauro Barni

Title: Adversarial examples: unavoidable threat or scarecrow?
Summary: Since the existence of adversarial examples was first observed, a vast amount of research has been devoted to understanding the origin of the weakness of deep learning models against properly crafted input samples, and to devising suitable remedies. After almost a decade, we now know that adversarial examples ubiquitously affect every kind of deep learning model, regardless of its architecture and the task it is meant to solve. However, the ultimate reason for the existence of adversarial examples is still not well understood. A great deal of effort has also been devoted to developing possible defenses, most of which turned out to be defeatable by a slight modification of the algorithm used to build the adversarial examples. On the other hand, the life of attackers is not as easy as one may think, since exploiting adversarial examples in real-life applications is anything but straightforward. The goal of this speech is to outline a possible explanation of why adversarial examples are so easy to craft in the case of binary decision networks, and to highlight the difficulties attackers face when applying adversarial examples in a real-life setting.
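How easy adversarial examples are to craft can be seen from a minimal sketch of one standard attack, the fast gradient sign method (FGSM), applied to a toy linear binary classifier. The talk does not prescribe this particular algorithm, and the weights and input below are illustrative assumptions: each coordinate of the input moves by at most eps in the direction that increases the loss.

```python
import numpy as np

# Toy linear binary classifier: score(x) = w @ x, predicted label = sign(score).
w = np.array([1.0, -2.0, 3.0])    # classifier weights (assumed for illustration)
x = np.array([0.5, 0.5, 0.5])     # input, correctly classified as positive
y = 1.0                           # true label: sign(w @ x) = sign(1.0) = +1

# FGSM step: move each coordinate eps along the sign of the loss gradient.
# For the margin loss -y * (w @ x), the gradient w.r.t. x is -y * w.
eps = 0.5
grad = -y * w
x_adv = x + eps * np.sign(grad)   # x_adv stays within an eps-ball around x

print("clean score      :", w @ x)      # 1.0  -> classified +1
print("adversarial score:", w @ x_adv)  # -2.0 -> classified -1, label flipped
```

A perturbation of only 0.5 per coordinate flips the decision, because the per-coordinate changes all add up coherently against the margin; the speech argues why this coherence is so easy to achieve in binary decision networks, while real-life deployment of such attacks is far harder.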
Short Biography: Mauro Barni is a full professor at the Department of Information Engineering and Mathematics, University of Siena, Italy. His research interests include digital image processing and information security, with particular reference to the application of image processing techniques to copyright protection (digital watermarking) and the authentication of multimedia content (multimedia forensics). He received his Ph.D. in informatics and telecommunications from the University of Florence, Italy. He is a Fellow of the IEEE.