
Interview with Ron Kerbs, CEO at Kidas: 5 key inquiries addressed

Kidas CEO Ron Kerbs, in conversation with the Center for Data Innovation, shared insights about his company's use of AI to identify and flag harmful or risky communications during online gaming, thereby safeguarding young gamers from emerging dangers. Kerbs elaborated on the strategies employed...

Interview Questions for Ron Kerbs, Head of Kidas:

In the ever-evolving world of online gaming, ensuring the safety of young players has become a paramount concern. Enter Kidas' ProtectMe AI system, a groundbreaking tool designed to safeguard young gamers from potential online threats.

The ProtectMe AI system is not your average keyword scanner. It employs a multi-layered AI approach, analysing in-game voice and text chats in real time to detect potential risks. Beyond simple word analysis, it takes into account tone, behavioural patterns, context, speech cadence, emotional cues, and in-game situations to accurately assess risk and flag problematic content such as cyberbullying, hate speech, grooming, scams, and exposure to explicit content[1].

This sophisticated system operates discreetly in the background, allowing parents, schools, and esports coaches to intervene promptly without disrupting the gaming experience. However, human oversight remains an essential component of the system. Flagged content is reviewed by a team of analysts to ensure accuracy and contextual understanding, which helps the AI system adapt to evolving language patterns and new forms of harmful communication[1].

Privacy is a priority for the ProtectMe AI system: raw recordings and full chat transcripts are never stored or shared, and the system retains no personally identifiable information from the communications[1]. At the same time, the system adapts intelligently to new phrases and behaviours, ensuring it doesn't miss dangerous signals hidden in evolving digital lingo.

In essence, Kidas' ProtectMe AI system continuously monitors live voice and text chat to detect risks in real time, analyses complex factors such as tone and behaviour, flags dangerous interactions discreetly, and relies on human moderators to review flagged content for improved accuracy and adaptation to evolving language[1]. This hybrid approach effectively safeguards young gamers by combining the speed and scale of AI with the judgment and contextual awareness of human oversight[1].
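To make the shape of such a hybrid pipeline concrete, here is a minimal sketch of an "AI triages, humans review" flow. Kidas has not published ProtectMe's internals, so every name, signal, and threshold below is an illustrative assumption, not the actual system:

```python
# Hypothetical sketch of a hybrid "AI flags, humans review" moderation flow.
# All signals, keywords, and thresholds are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class ChatEvent:
    text: str  # transcribed voice or typed chat (scored, not retained)
    signals: dict = field(default_factory=dict)  # e.g. tone scores from upstream models

def ai_risk_score(event: ChatEvent) -> float:
    """Toy multi-signal scorer: combines a content match with contextual signals."""
    score = 0.0
    if any(w in event.text.lower() for w in ("send me money", "what's your address")):
        score += 0.5  # content-level signal
    score += 0.3 * event.signals.get("aggressive_tone", 0.0)        # tone signal
    score += 0.2 * event.signals.get("adult_stranger_context", 0.0)  # context signal
    return min(score, 1.0)

def moderate(event: ChatEvent, threshold: float = 0.4) -> str:
    """AI scores in real time; only events above threshold reach human analysts."""
    if ai_risk_score(event) >= threshold:
        return "flagged_for_human_review"  # analysts confirm and feed corrections back
    return "allowed"
```

The design choice this illustrates is the division of labour the article describes: the automated scorer provides speed and scale over every message, while the expensive human judgment is spent only on the small flagged subset.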

[1] Source: Kidas' official website and press releases.

  1. The ProtectMe AI system employs a multi-layered AI approach, analyzing in-game voice and text chats in real time and weighing tone, behavioral patterns, context, speech cadence, emotional cues, and in-game situations to detect potential risks.
  2. This system, designed to safeguard young gamers, operates discreetly in the background, allowing for immediate intervention by parents, schools, and esports coaches without disrupting the gaming experience.
  3. The ProtectMe AI system prioritizes privacy: it does not store or share raw recordings or full chat transcripts, and it intelligently adapts to new phrases and behaviors so it doesn't miss dangerous signals hidden in evolving digital lingo.
  4. By combining the speed and scale of AI with the judgment and contextual awareness of human oversight, Kidas' hybrid approach effectively safeguards young gamers and helps ensure a safe online environment.
