World Artificial Intelligence Cannes Festival 2024

HPA
14 min read · Feb 14, 2024

Introduction

Between February 8th and 10th, 2024, HPA took part in the prestigious World Artificial Intelligence Cannes Festival (WAICF24), a gathering that showcases the forefront of AI innovation and its intersection with global industries. What follows is a compilation of reflections and highlights from the festival’s key moments, where thought leaders, innovators, and enthusiasts converged to explore the future of artificial intelligence in a setting renowned for its blend of glamour and intellectual discourse.

Interactions between research and business fields

In the rapidly evolving landscape of artificial intelligence, the WAICF served as a pivotal platform for experts from diverse backgrounds to share insights and strategies on leveraging AI for real-world impact. The discussions at the forum highlighted various approaches and considerations essential for translating research into tangible outcomes and fostering innovation, especially in the context of large organisations and the broader societal implications of AI advancements.

Real-World Impact from Research

Adam Cheyer, co-founder of Siri, shared insights from his experience at Apple, stressing the delicate balance between research and product development. He advocated for a mixed research portfolio spanning short-, medium-, and long-term goals, and stressed the importance of timing a product’s market release to maximise impact.

The Evolution and Impact of Large Language Models

Adam Cheyer then expressed his astonishment at the rapid advancements in LLMs over the past year, highlighting their potential to comprehend and reason with the world’s knowledge in unprecedented ways. He noted the shift towards AI tools that not only know but also do, suggesting the next developmental phase should integrate these aspects more closely.

Regulatory Considerations and the AI Act

Speakers stressed the importance of cautious progress in AI development, with a specific focus on ensuring the validity, robustness, and security of AI systems in alignment with regulatory frameworks like the AI Act. This emphasis on regulation underscores the need for the AI community to navigate ethical and safety considerations meticulously as the technology continues to integrate into various aspects of society.

Is AI going to become generally intelligent?

The WAICF 2024 brought together esteemed experts to delve into the complex interplay between artificial intelligence, emotion detection, consciousness, and their implications for society and industry. The panel discussion named “Is AI going to become generally intelligent?” underscored the nuanced challenges and opportunities in advancing AI technologies, particularly in understanding and integrating emotional and cognitive dimensions into intelligent systems.

The Role of Emotions in AI

Claude Frasson, Honorary Professor in Computer Science at the University of Montreal, highlighted the significance of emotions in achieving truly intelligent machines, drawing attention to the roles of the brain’s amygdala and hippocampus in generating and associating emotions. He introduced the concept of “pseudo-emotions” in computers, which mimic real emotions but lack genuine emotional experiences.

Pascale Fung, Director of CAiRE, contested the notion that pseudo-emotions are the limit of AI’s emotional capabilities, suggesting that current benchmarks demonstrate AI’s ability to recognise and manifest emotions. However, she acknowledged the limitations of AI learning predominantly from textual data, contrasting it with human and animal learning processes.

Consciousness and Intelligence in AI

Axel Cleeremans argued that consciousness and emotions are distinct from intelligence, using AlphaGo as an example of AI that exhibits high intelligence without consciousness or emotional awareness. He posited that while current AI systems can mimic human-like features, they lack genuine world understanding and consciousness.

The debate between Cleeremans and Fung explored the possibility and desirability of creating conscious machines, with Fung emphasising empirical evidence and the potential for future AI to be biologically inspired. Cleeremans raised ethical and existential concerns about autonomous, conscious machines surpassing human capabilities.

AI’s Impact on Society and Industry

Claude Frasson discussed the targeted use of emotion detection in AI, particularly in Japan’s development of emotional robots for senior care. These robots simulate emotions to elicit genuine emotional responses from humans, illustrating the potential for AI to provide companionship and empathy.

Patrick Johnson highlighted the transformative effects of AI in empowering and diversifying industries. He noted changing job profiles, with engineers evolving into managers, and the use of AI tools such as GPT to facilitate rapid entry into new fields.

Challenges in Understanding AI’s Decision-Making

Patrick Johnson emphasised the need to demystify AI’s “black box” nature, especially concerning LLMs. He advocated for exploring and understanding the latent spaces within AI models to ensure transparency and accountability in AI’s decision-making processes.
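
By way of illustration, the sketch below probes a model’s latent space in the spirit Johnson describes: it extracts hidden-state embeddings from a small open-source transformer and compares two inputs. The model choice and example sentences are assumptions for illustration, not details from the talk.

```python
# A minimal sketch of inspecting a model's latent space: mean-pool the last
# hidden layer into one vector per input, then compare inputs in that space.
# Model name and example sentences are illustrative assumptions.
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModel.from_pretrained("distilbert-base-uncased").eval()

def embed(text: str) -> torch.Tensor:
    """Return one latent vector for `text` (mean of the last hidden layer)."""
    with torch.no_grad():
        out = model(**tok(text, return_tensors="pt"))
    return out.last_hidden_state.mean(dim=1).squeeze(0)

a = embed("The loan was approved")
b = embed("The loan was denied")
similarity = torch.cosine_similarity(a, b, dim=0)  # probe how the model relates them
```

Probes like this do not make an LLM fully transparent, but they are a first step towards the kind of accountability Johnson calls for.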

The discourse at WAICF 2024 shed light on the critical need for a multidisciplinary approach to AI development, blending neuroscience, ethics, and technology to forge intelligent systems that are both advanced and aligned with human values and well-being.

AI for the Greater Good of Society

The “AI for the Greater Good of Society” speech at the conference brought together a diverse group of experts who shared their insights on how artificial intelligence can be harnessed to address some of the most pressing challenges facing humanity and the planet. Each speaker brought a unique perspective, informed by their professional background and area of expertise, to illustrate the transformative potential of AI across various sectors.

AI in Geopolitical Problem-Solving

Max Murphy, a data scientist at Overwatch Data, emphasised the critical role of AI in addressing complex geopolitical issues. His insights highlighted how data analysis and predictive modelling could offer novel solutions to long-standing international conflicts and cooperation challenges.

AI in Life Sciences and Biotech

Romain Forestier, Tech and Scientific Research Director at Jedi, pointed out Europe’s lag in Life Sciences compared to China and the US. He underlined the significant potential for AI in revolutionising fields such as ocean microbiome research, drug discovery, antimicrobial resistance databases, and genomics, particularly in developing new cancer treatments and focusing on rare diseases.

AI in Conservation Efforts

Constanza Gomez Mont, founder and principal of C Minds and co-founder of AI for Climate and NaturaTech LAC, showcased how AI systems are being utilised to monitor and conserve wildlife, such as jaguars and manatees. By analysing movement patterns, behaviours, and using sensors and microphones to identify species, AI contributes to developing effective conservation strategies.

AI in Healthcare in Africa

Nyasha Samhembere, founder of Feels, highlighted the transformative impact AI could have on healthcare in Africa, where there is a dire shortage of medical professionals and limited access to healthcare. He discussed the potential of AI in diagnostics, tailored healthcare solutions, drug development, and health education. However, he also cautioned about challenges such as data security, biases in training data, cultural sensitivities, and the need for explainable AI models to build trust and acceptance.

AI in Energy and Infrastructure

Erik Asberg, CEO of Esmart Systems, spoke about the critical importance of maintaining and understanding the condition of the power grid, which is essential for supporting renewable energy and ensuring the reliability of energy supply. He introduced the concept of “collaborative AI”, where AI and drones are used in conjunction with human inspectors to enhance the efficiency and effectiveness of power grid maintenance, marking a shift from traditional, labour-intensive methods to more innovative, tech-driven approaches.

Conclusions

These discussions at the conference underscored the vast potential of AI to contribute positively to society across various domains, from environmental conservation and healthcare to energy and life sciences. However, the speakers also highlighted the need to navigate the ethical, security, and cultural challenges associated with AI deployment to maximise its benefits for the greater good.

Should we slow down research on AI?

The debate brought together prominent figures in the AI field to discuss the pace of AI development and the implications for society. Each participant brought valuable perspectives based on their extensive experience and contributions to the field.

Overview of the Debate on Slowing Down AI Research

Nick Bostrom, Professor at Oxford University, known for his work on existential risk and the future of artificial intelligence, argued for the desirability of having the option to slow down or pause AI development if it begins to outpace our ability to control it. He cautioned against prematurely blocking new AI capabilities but also highlighted the importance of proceeding with caution to avoid potential catastrophic outcomes.

Yann LeCun, Chief AI Scientist at Meta, advocated strongly against slowing down AI research, emphasising the distinction between research and deployment. He dismissed fears of AI taking over the world as unfounded and highlighted the significant distance AI has to cover before reaching the intelligence level of even a cat.

Francesca Rossi, IBM’s Ethics Global Leader, argued against slowing down AI research, distinguishing between research aimed at developing powerful models and research focused on addressing current AI issues such as explainability and biases. She emphasised the importance of AI research in solving existing problems and preparing for future challenges.

Mark Brakel, Director of Policy at the Future of Life Institute, is known for his work on AI policy and safety. He argued in favour of slowing down certain aspects of AI development, particularly those that could lead to misuse or loss of control. He drew parallels with societal decisions to regulate or slow down other technologies, such as human cloning, and advocated for redirecting research towards safer and more responsible avenues.

Safeguards and Regulations for AI Development

The debate also touched on the need for safeguards and regulations to guide AI development responsibly:

  • Bostrom highlighted the potential for AI to pose existential risks and the challenge of regulating AI after deployment. He also raised ethical considerations regarding the treatment of potentially conscious AI entities.
  • Rossi expressed scepticism about AI posing a “systemic risk” and favoured focusing on real, immediate challenges rather than hypothetical existential threats.
  • LeCun dismissed the notion of significant existential risks from AI as unrealistic, advocating instead for a focus on ensuring diversity and freedom in AI development to prevent cultural or corporate monopolies.
  • Brakel supported the idea of more regulations, particularly to address specific high-risk applications of AI, and suggested a collaborative international approach to AI governance, akin to CERN for particle physics.

Conclusions

The debate underscored the diverse perspectives within the AI community on the pace of AI development and the need for regulation. While there was consensus on the importance of AI research in solving existing and future problems, opinions diverged on the potential risks and the best ways to mitigate them. The discussion highlighted the complexity of balancing innovation with safety and ethics in the rapidly evolving field of AI.

How could we solve interoperability issues with AI and LLM?

The talk, delivered by Robin Röhm, delves into the critical challenges and potential solutions related to data interoperability in artificial intelligence, particularly in the context of Large Language Models (LLMs) and their integration into various sectors, including healthcare.

Challenges in Data Interoperability

  • Data Abundance and Unstructured Formats
    The AI field is characterised by vast amounts of data, often unstructured, making it difficult for machines to interpret and use efficiently (see the extraction sketch after this list).
  • Privacy Concerns
    As AI adoption grows in enterprises, privacy issues become a significant barrier, especially when dealing with sensitive or personal information.
  • Regulatory Requirements
    In sectors like healthcare, AI usage is subject to stringent requirements, including ethical use, transparency, explainability, human control, accountability, privacy, security, safety by design, fairness, and bias minimisation.
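
To illustrate the unstructured-data challenge, here is a minimal sketch that imposes structure on free text with regular expressions; real pipelines typically combine parsers, NER models, or LLM-based extraction. The clinical note and field patterns are hypothetical.

```python
# A minimal sketch of turning unstructured text into typed fields so machines
# can use it. The note and the extraction patterns are illustrative assumptions.
import re

NOTE = "Pt. Jane Doe, DOB 1980-05-01, presents with hypertension. BP 150/95."

FIELDS = {
    "dob": r"DOB\s+(\d{4}-\d{2}-\d{2})",
    "blood_pressure": r"BP\s+(\d{2,3}/\d{2,3})",
}

def extract(text: str) -> dict:
    """Pull each named field out of free text, or None if absent."""
    return {name: (m.group(1) if (m := re.search(pat, text)) else None)
            for name, pat in FIELDS.items()}

record = extract(NOTE)  # {'dob': '1980-05-01', 'blood_pressure': '150/95'}
```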

Proposed Solutions for Enhancing Interoperability

1. Model Deployment Strategy

a. The suggestion to “move the models to the data, not the other way around” introduces a paradigm where AI models are deployed and trained on-premise or close to where the data resides. This approach addresses privacy concerns by reducing data movement and enhancing data security.

b. Federated learning models are highlighted as a beneficial framework in this context. In federated learning, models are trained across multiple decentralised devices or servers holding local data samples, without exchanging them. This method supports privacy, security, and data sovereignty.
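
To make the federated idea concrete, below is a minimal sketch of federated averaging (FedAvg), the canonical federated learning scheme: each site takes gradient steps on its own private data, and only model weights, never raw records, cross the site boundary. The toy linear model, synthetic data, and hyperparameters are illustrative assumptions, not details from the talk.

```python
# A minimal sketch of federated averaging (FedAvg) with a toy linear model.
# Raw data stays on each site; only weight vectors are exchanged.
import numpy as np

def local_step(weights, X, y, lr=0.1):
    """One gradient step of linear regression on a site's private data."""
    grad = 2 * X.T @ (X @ weights - y) / len(y)
    return weights - lr * grad

def federated_round(global_weights, sites):
    """Each site updates a copy of the global model; the server averages."""
    local_models = [local_step(global_weights.copy(), X, y) for X, y in sites]
    return np.mean(local_models, axis=0)  # only weights cross site boundaries

rng = np.random.default_rng(0)
sites = [(rng.normal(size=(50, 3)), rng.normal(size=50)) for _ in range(4)]
weights = np.zeros(3)
for _ in range(100):
    weights = federated_round(weights, sites)
```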

2. Achieving High-Quality, Interoperable Data

a. The process involves cleaning and enriching raw data before mapping it to a common data model. This ensures that data from various sources becomes interoperable, facilitating its use across different AI models and applications (a minimal sketch follows below).

b. High-quality, interoperable data is key to addressing regulatory requirements and ensuring that AI applications are ethical, transparent, and fair.
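
The sketch below illustrates point 2a under stated assumptions: the source field names and the common schema are hypothetical, but the clean-then-map pattern is the one described above.

```python
# A minimal sketch of mapping heterogeneous records onto a common data model.
# The target schema and per-source field names are illustrative assumptions.
from datetime import date

COMMON_SCHEMA = {"patient_id": str, "birth_date": date, "diagnosis_code": str}

# Per-source field mappings: raw field name -> common field name.
FIELD_MAPS = {
    "hospital_a": {"pid": "patient_id", "dob": "birth_date", "icd": "diagnosis_code"},
    "hospital_b": {"PatientID": "patient_id", "BirthDate": "birth_date", "Dx": "diagnosis_code"},
}

def to_common_model(record: dict, source: str) -> dict:
    """Clean and map one raw record into the common data model."""
    mapped = {}
    for raw_key, common_key in FIELD_MAPS[source].items():
        value = record.get(raw_key)
        if isinstance(value, str):
            value = value.strip()                   # basic cleaning
        if common_key == "birth_date" and isinstance(value, str):
            value = date.fromisoformat(value)       # normalise date format
        mapped[common_key] = value
    return mapped

row = to_common_model({"pid": " 42 ", "dob": "1980-05-01", "icd": "C50"}, "hospital_a")
```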

3. Hybrid Workflows

a. The speech emphasises the importance of hybrid workflows in driving enterprise AI adoption. Rather than relying solely on LLMs for applications, it suggests using a combination of techniques. This approach includes leveraging simpler, more explainable models alongside LLMs to ensure that AI applications are not only powerful but also understandable and accountable.
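
A minimal sketch of such a hybrid workflow follows, assuming a hypothetical ticket-routing task: cheap, fully explainable rules handle the clear cases, and an LLM, represented here by a placeholder `call_llm` stub rather than a real client, handles the rest.

```python
# A minimal sketch of a hybrid workflow: explainable rules first, LLM fallback.
# The keyword rules are illustrative; `call_llm` is a hypothetical stand-in.
RULES = {"refund": "billing", "invoice": "billing", "password": "account"}

def call_llm(text: str) -> str:
    """Placeholder for a real LLM API call (assumption, not a real client)."""
    raise NotImplementedError

def route_ticket(text: str) -> tuple[str, str]:
    """Return (label, rationale); the rationale keeps decisions auditable."""
    for keyword, label in RULES.items():
        if keyword in text.lower():
            return label, f"matched keyword '{keyword}'"   # fully explainable path
    return call_llm(text), "LLM fallback"                  # powerful but opaque path

label, why = route_ticket("I need a refund for last month")
```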

Conclusion

Solving interoperability issues with AI and LLMs requires a multifaceted approach that addresses data privacy, regulatory compliance, and the technical challenges of working with unstructured data. By deploying models closer to the data source, standardising data into interoperable formats, and adopting hybrid workflows that combine the strengths of various AI techniques, it’s possible to create AI applications that are both effective and aligned with ethical standards. This strategy ensures that AI can be safely and effectively integrated into critical sectors like healthcare, where the stakes for privacy, security, and accuracy are particularly high.

The New Facets of Cybersecurity in the Age of Artificial Intelligence

Esma Aimeur, who leads the AI for Cybersecurity Lab at the University of Montreal, delves into the evolving landscape of cybersecurity in the context of rapid advancements in artificial intelligence. Aimeur’s insights shed light on the dual nature of AI as both a tool for enhancing security measures and a potential vector for sophisticated cyber threats.

Evolution of Cyber Threats

Aimeur outlines the progression from computer crime to cybercrime, and now to AI-driven crime, highlighting the rise of sophisticated scams like SMS phishing, the use of tools like PassGAN for password cracking, fake review attacks, and privacy invasions.

AI as a Double-Edged Sword

The speech emphasises how AI technologies that enable personalisation can also be exploited to invade privacy, stressing the importance of education in recognising fake news and images as crucial defences in the digital age.

The Challenge of Disinformation

Aimeur discusses systematic reviews of fake news and misinformation, focusing in particular on the creation and proliferation of deepfakes and on the ease with which realistic yet entirely fabricated content can be generated, such as videos translated from one language to another.

The Role of Large Language Models (LLMs)

The potential of LLMs, exemplified by ChatGPT, is highlighted both in terms of their positive applications and the vulnerabilities they introduce, including the risk of data breaches, as evidenced by an incident in June 2023 where subscribers’ prompts were leaked.

Exploiting AI Systems

Aimeur points out how attackers use techniques like jailbreaking and reverse psychology to bypass AI systems’ safeguards, exploiting these systems to carry out illicit activities or to escape model limitations.

Risks and Challenges in AI-Powered Cybersecurity

The speech identifies several AI-driven cybersecurity threats (an illustrative attack sketch follows below):

  • the creation of sophisticated malware;
  • the ubiquity of deepfakes;
  • voice cloning technologies like ParrotAI;
  • image classification attacks that deceive AI systems, e.g., making an autonomous vehicle stop by projecting a phantom object;
  • poisoning attacks that compromise training data;
  • the use of transfer learning to create fraudulent AI chatbots targeting bank customers.
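
As one concrete example of the image classification attacks mentioned above, here is a minimal sketch of the fast gradient sign method (FGSM), a classic evasion attack; the pretrained model, unnormalised random input, and epsilon value are illustrative assumptions, not anything demonstrated at the talk.

```python
# A minimal sketch of FGSM: perturb an image in the direction that most
# increases the classifier's loss, so the prediction is nudged off course.
import torch
import torch.nn.functional as F
from torchvision.models import resnet18, ResNet18_Weights

model = resnet18(weights=ResNet18_Weights.DEFAULT).eval()

def fgsm_attack(image: torch.Tensor, label: torch.Tensor, epsilon: float = 0.03):
    """Return an adversarial copy of `image` for the given true `label`."""
    image = image.clone().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    adversarial = image + epsilon * image.grad.sign()  # step up the loss surface
    return adversarial.clamp(0, 1).detach()

# Usage: x is a (1, 3, 224, 224) tensor in [0, 1], y the true class index.
x, y = torch.rand(1, 3, 224, 224), torch.tensor([207])
x_adv = fgsm_attack(x, y)
```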

Positive Applications of AI in Cybersecurity

Despite the challenges, Aimeur emphasises the potential of generative AI (GenAI) to bolster cybersecurity efforts for the “good guys”, underscoring the need for optimism and the development of strong AI-powered defences to ensure a secure digital future.

Generative AI: Myth and Reality

Luc Julia, Scientific Director at Renault, presents a grounded perspective on the current state and future directions of Generative Artificial Intelligence (GenAI), dispelling common misconceptions and highlighting the practical aspects and challenges of integrating AI into various domains.

Key Insights:

Understanding AI

Luc Julia clarifies that AI is not a monolithic entity emerging from science fiction but a scientific field with a rich history, emphasising that AI encompasses multiple technologies and has been evolving over many years.

GenAI: Evolution, Not Revolution

The advancements in GenAI are characterised as an evolution rather than a revolution. Luc Julia points out that the gains stem from the accumulation of data and computational power, which comes at the cost of increased energy consumption. The real shift, he argues, lies in the democratisation of AI through intuitive interfaces like natural language prompts, making AI accessible to a broader audience.

Creativity and AI

The notion that GenAI should not be confused with “CreativeAI” is stressed, with Luc Julia noting that while AI can generate content, the creative impulse remains distinctly human. Tools like Stable Diffusion act as aids to human creativity, much like Photoshop, rather than replacements for human ingenuity.

AI as a Toolbox

AI is likened to a toolbox filled with various instruments designed to assist and enhance human capabilities, rather than an autonomous entity capable of independent thought or action.

Challenges: Hallucinations, Mistakes and Finetuning as a Solution

The speaker underscores the importance of being educated about AI’s limitations, such as the propensity for “hallucinations” or generating factually incorrect content, as demonstrated by a lawyer who used GPT to draft a legal brief filled with non-existent cases.

Fine-tuning AI models with trustworthy data is recommended to enhance specificity and accuracy, ensuring that AI outputs are more reliable and relevant to specific needs.
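
A minimal sketch of such fine-tuning follows, using the Hugging Face Trainer on a hypothetical curated text file; the base model, file name, and hyperparameters are assumptions, and a real run would add validation and careful vetting of the training data.

```python
# A minimal sketch of fine-tuning a small causal language model on curated,
# trustworthy text. Dataset file and hyperparameters are illustrative.
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          Trainer, TrainingArguments)
from datasets import load_dataset

tok = AutoTokenizer.from_pretrained("distilgpt2")
tok.pad_token = tok.eos_token
model = AutoModelForCausalLM.from_pretrained("distilgpt2")

# Curated domain text, one example per line (hypothetical file).
data = load_dataset("text", data_files={"train": "trusted_corpus.txt"})

def tokenize(batch):
    enc = tok(batch["text"], truncation=True, padding="max_length", max_length=128)
    enc["labels"] = enc["input_ids"].copy()  # causal LM: predict the next token
    return enc

train = data["train"].map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="ft-out", num_train_epochs=1,
                           per_device_train_batch_size=8),
    train_dataset=train,
)
trainer.train()
```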

Intellectual Property Concerns

The issue of intellectual property rights in the context of GenAI is highlighted, raising questions about data ownership and copyright infringement.

Jailbreaking and Ethical Considerations

The potential for “jailbreaking” AI, akin to unlocking iPhones, to bypass restrictions and prompt AI for unethical or dangerous information, such as bomb-making recipes, is discussed, emphasising the need for ethical guidelines and safeguards.

Future Directions for GenAI

The speech concludes with a call for the development of more user-friendly fine-tuning processes, open-source data and algorithms, frugal models to address sustainability concerns, and the integration of GenAI with traditional techniques to create hybrid solutions that leverage the best of AI without over-relying on it.

Conclusions

The speech provides a comprehensive overview of GenAI’s current state, debunking common myths and highlighting the technology’s role as a tool for enhancing human creativity and efficiency. By acknowledging the challenges and ethical considerations associated with GenAI, Luc Julia advocates for a balanced approach that combines the power of AI with human oversight and creativity to navigate the future of artificial intelligence responsibly.

Conclusions

In conclusion, the World Artificial Intelligence Cannes Festival 2024 served as a vibrant melting pot of ideas, innovations, and critical discussions that shape the trajectory of artificial intelligence. Across various speeches and debates, a common theme emerged: the dual nature of AI as both a catalyst for unprecedented advancements and a source of new challenges. From the ethical dilemmas posed by potential AI consciousness to the practical concerns of data interoperability and cybersecurity, the festival underscored the multifaceted impact of AI on society.

As we stand on the cusp of significant AI-driven transformations, it’s clear that a collaborative, multidisciplinary approach is essential for harnessing the full potential of AI while mitigating its risks. The insights gathered from WAICF24 highlight the importance of continuing dialogue between technologists, policymakers, ethicists, and end-users to ensure that AI development is guided by a balanced perspective that prioritises human welfare, ethical considerations, and sustainability.

The journey of AI is far from a straightforward path; it’s a complex, evolving narrative that requires careful stewardship. As we navigate this terrain, the lessons from WAICF24 remind us of the power of human creativity, the importance of ethical foresight, and the need for inclusivity in shaping an AI-enhanced future. In this dynamic landscape, our collective wisdom, ethics, and innovative spirit will be our most valuable assets in realising the promise of AI for the greater good of society.

HPA

High Performance Analytics account. Follow us to keep informed on #AI #ML #DataScience & more