Guest Speaker: Dr. Jeff Schwartzentruber, Senior Advisor for AI and ML, Rogers Cybersecure Catalyst
05/06/2024 • 12:00 pm
This webinar focused on the evolution and adoption of generative AI, particularly its implications for small businesses, while addressing concerns and highlighting the need for responsible implementation. Topics included emotional responses to AI, the impact of generative AI on the workforce and on various industries, training and tools tailored for small businesses, security and ethical concerns, verticalization, and practical applications explored through audience interaction.
Small businesses are beginning to explore the benefits of generative AI tools. Initial use is often undirected, but customizing these tools to fit specific business needs can unlock significant productivity gains: business owners who tailor AI tools to their own domain can streamline processes and address long-standing pain points, and multi-modal capabilities extend that versatility across different skill sets and tasks. Excitement about productivity improvements is tempered by the more complex problems businesses encounter as adoption deepens, but ongoing discussion and iterative exploration can lead to innovative solutions and operational advances.
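As a minimal sketch of what "tailoring a general-purpose tool to a domain" can look like in practice, the example below wraps a chat completion in a business-specific system prompt. It assumes the OpenAI Python client; the bakery scenario, model choice, and ask_bakery_assistant helper are illustrative only, not something discussed in the webinar.

```python
# Minimal sketch: constraining a general-purpose LLM to a small business's
# domain with a tailored system prompt. Assumes the OpenAI Python client
# (pip install openai); the prompt, model name, and helper are hypothetical.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are an assistant for a small bakery. Answer questions about "
    "inventory planning, supplier emails, and weekly staff scheduling. "
    "Keep answers short and practical."
)

def ask_bakery_assistant(question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative choice; any chat model works
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(ask_bakery_assistant("Draft a reorder email for 20 kg of flour."))
```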
From a security perspective, there are different levels to consider, particularly around governance, risk, and compliance (GRC). One emerging approach is to appoint a data governor for each department, so that security measures can be tailored to that department's needs and fit organizations of different sizes and structures. Monitoring how employees use AI tools is equally important for identifying and addressing security vulnerabilities: operationalizing that usage data provides real-time insight into usage patterns and enables proactive measures to safeguard sensitive information. By centralizing these data streams and applying role-based access controls, organizations can enforce security measures consistently across their data ecosystem.
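To make the idea of centralized usage data gated by role-based access concrete, here is a minimal sketch assuming an in-memory log and a hand-rolled role map; the roles, departments, and the record_usage/read_usage helpers are hypothetical, not a specific product's API.

```python
# Minimal sketch: centralizing AI-tool usage events and restricting who can
# read them via role-based access control. Roles, departments, and helpers are
# hypothetical; a real deployment would use an identity provider and a SIEM.
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Which roles may read the centralized usage log.
ROLE_PERMISSIONS = {
    "data_governor": {"read_usage"},
    "employee": set(),
}

@dataclass
class UsageEvent:
    user: str
    department: str
    tool: str
    prompt_chars: int
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

USAGE_LOG: list[UsageEvent] = []

def record_usage(user: str, department: str, tool: str, prompt: str) -> None:
    """Append a usage event to the central log (store sizes, not raw prompts)."""
    USAGE_LOG.append(UsageEvent(user, department, tool, len(prompt)))

def read_usage(requester_role: str, department: str) -> list[UsageEvent]:
    """Return a department's events only if the requester's role allows it."""
    if "read_usage" not in ROLE_PERMISSIONS.get(requester_role, set()):
        raise PermissionError("role not allowed to read usage data")
    return [e for e in USAGE_LOG if e.department == department]

record_usage("alice", "marketing", "chatgpt", "Draft a product launch email...")
print(read_usage("data_governor", "marketing"))
```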
In addressing data loss prevention (DLP) and safeguarding internal IP while using AI tools, small businesses are constrained by limited resources, but data protection remains essential. Data privacy concerns arise when using OpenAI, for example, even though data sent through the API is deleted after 30 days; many small enterprises are simply unaware of whether and how their data contributes to model training, which underscores the need for cyber education. Proactive security tools are still scarce, so implementing controls and policies is essential. Open-source projects and tools such as Private AI and Portkey can help scrub prompts and identify personally identifiable information (PII). Even though AI regulations remain broad and the regulatory landscape is still evolving, having any form of control or policy in place is beneficial.
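The kind of prompt scrubbing such tools perform can be approximated with a few regular expressions. The sketch below is a generic illustration, not Private AI's or Portkey's actual API; the patterns cover only emails, phone-like numbers, and SIN/SSN-like sequences and would need to be extended for real DLP.

```python
# Minimal sketch: redacting obvious PII from a prompt before it leaves the
# organization. Generic illustration only -- not Private AI's or Portkey's API.
import re

PII_PATTERNS = {
    "EMAIL": re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}"),
    "PHONE": re.compile(r"\b(?:\+?1[-. ]?)?\(?\d{3}\)?[-. ]?\d{3}[-. ]?\d{4}\b"),
    "SIN_SSN": re.compile(r"\b\d{3}[- ]?\d{2,3}[- ]?\d{3,4}\b"),
}

def scrub_prompt(prompt: str) -> str:
    """Replace matched PII with labeled placeholders before calling an external API."""
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

raw = "Email jane.doe@example.com or call 416-555-0199 about invoice 88."
print(scrub_prompt(raw))
# -> Email [EMAIL REDACTED] or call [PHONE REDACTED] about invoice 88.
```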
Organizations are addressing the cybersecurity risks of adopting generative AI by prioritizing awareness and open discussion of vulnerabilities. Enumerating AI services and conducting thorough vendor analyses upfront are crucial first steps, and the resulting systems must then be monitored in operation. Reference checks and resources such as the Okay guide can help teams follow best practices. Transparency, explainability, and monitoring are especially important for AI used in critical infrastructure. Investors are also scrutinizing cybersecurity measures more closely, which affects startup funding rounds, so building a strong security foundation from the outset is imperative.
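Enumerating AI services and tracking vendor reviews can start as a simple inventory. The sketch below uses a hypothetical record structure and review threshold, not any specific framework, to flag services that have never been reviewed or whose last review is stale.

```python
# Minimal sketch: an inventory of third-party AI services with basic
# vendor-risk fields, used to flag entries lacking a recent security review.
# Field names and the one-year threshold are hypothetical.
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class AIService:
    name: str
    vendor: str
    data_retention_days: int           # how long the vendor retains submitted data
    handles_customer_data: bool
    last_security_review: date | None  # None means never reviewed

def overdue_for_review(services: list[AIService], max_age_days: int = 365) -> list[AIService]:
    """Return services with no review, or a review older than max_age_days."""
    cutoff = date.today() - timedelta(days=max_age_days)
    return [
        s for s in services
        if s.last_security_review is None or s.last_security_review < cutoff
    ]

inventory = [
    AIService("Chat assistant", "OpenAI", 30, True, date(2024, 1, 15)),
    AIService("Meeting summarizer", "ExampleVendor", 90, True, None),
]
for service in overdue_for_review(inventory):
    print(f"Review needed: {service.name} ({service.vendor})")
```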
Addressing misinformation and deepfakes created with generative AI is difficult: detecting them at scale demands extensive data scraping and processing, while producing them requires minimal data yet can have massive impact. An open question is whether misinformation should fall under cybersecurity's purview within organizations. There is no technological solution yet, so organizations should develop policies and action plans for responding to misinformation, spanning both PR and internal governance, even though a comprehensive framework is still lacking from a GRC perspective.