Guest Speaker: Kevin Nilsen, President and CEO, ECO Canada
19/08/2024 • 12:00 pm
This webinar focused on generative AI (GenAI) and its implications for Canada’s environmental sector, especially for SMEs. Key points included the rapid growth of AI technology, its potential to enhance productivity, and the importance of AI literacy and training to mitigate risks and maximize benefits. The discussion also covered data privacy, the accuracy of AI-generated content, and the need for new business models that adapt to AI-driven efficiencies.
Kevin:
I'm excited about the opportunities generative AI presents, similar to how the BlackBerry revolutionized productivity, but on a larger scale. While our sector has been slow to adopt it, there's consensus on its potential for efficiency, such as speeding up report writing and database scans. In environmental work, such as assessing contamination in downtown Toronto, AI accelerates searches of historical records. Accuracy is crucial, however, so fact-checking remains essential.
AI is also enhancing presentations. I'm not skilled at creating PowerPoint decks, so I used to hire help. Now, AI tools make me more efficient by helping me match a deck's content, tone, and feel to its target audience.
Additionally, tools like ChatGPT can summarize large documents quickly, which is valuable in our sector where we often deal with extensive reports. For instance, one company efficiently summarized input from about 50 people using AI in just 10 minutes.
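As a rough illustration of the summarization pattern Kevin describes, here is a minimal sketch, assuming the `openai` Python package (v1+) and an `OPENAI_API_KEY` environment variable; the model name, chunk size, and prompt wording are illustrative assumptions, not details from the webinar:

```python
# Minimal sketch: summarize a long report in chunks, then summarize
# the chunk summaries. Model name, chunk size, and prompts are
# illustrative assumptions, not recommendations from the webinar.
from openai import OpenAI

client = OpenAI()

def ask(text: str, instruction: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; substitute whatever you license
        messages=[{"role": "user", "content": f"{instruction}\n\n{text}"}],
    )
    return response.choices[0].message.content

def summarize_report(report: str, chunk_chars: int = 8000) -> str:
    # Split into fixed-size chunks so each request fits the context window.
    chunks = [report[i:i + chunk_chars] for i in range(0, len(report), chunk_chars)]
    partials = [ask(c, "Summarize the key points of this excerpt:") for c in chunks]
    # Merge the per-chunk summaries into a single overview.
    return ask("\n\n".join(partials), "Combine these partial summaries into one:")
```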
Overall, generative AI offers significant potential for efficiency and innovation in the environmental sector, encouraging new ways of thinking and helping us reconsider overlooked aspects.
Mark:
I'm fascinated by the shift towards industry-specific AI models. We're moving beyond public models like ChatGPT to private ones tailored for sectors like the environment, using curated, validated data. This approach, known as retrieval-augmented generation, ensures results come from approved sources, not random internet data.
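To make that distinction concrete, here is a minimal retrieval-augmented generation sketch, assuming the `openai` and `numpy` packages; the model names, prompt wording, and `approved_docs` corpus are assumptions for illustration. The point is structural: the model is instructed to answer only from a curated set of sources.

```python
# Minimal RAG sketch: answers are grounded in a curated, approved corpus
# rather than whatever the model absorbed from the open internet.
# Model names and prompt wording are illustrative assumptions.
import numpy as np
from openai import OpenAI

client = OpenAI()

def embed(texts: list[str]) -> np.ndarray:
    response = client.embeddings.create(
        model="text-embedding-3-small", input=texts
    )
    return np.array([item.embedding for item in response.data])

def answer(question: str, approved_docs: list[str], top_k: int = 3) -> str:
    # Rank the curated documents by cosine similarity to the question.
    doc_vecs = embed(approved_docs)
    q_vec = embed([question])[0]
    scores = (doc_vecs @ q_vec) / (
        np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q_vec)
    )
    context = "\n\n".join(approved_docs[i] for i in np.argsort(scores)[-top_k:])
    prompt = (
        "Answer using ONLY the approved sources below. If they do not "
        f"contain the answer, say so.\n\nSources:\n{context}\n\n"
        f"Question: {question}"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
```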
The next step is private models for government and other sectors, providing access to all relevant documents for decision-making. This could streamline processes, eliminating the need for large teams to summarize information. Ontario's pilot project with doctors using AI to take notes and integrate them into medical records is a great example, improving efficiency by 40% and allowing more patient interaction.
Regarding AI's impact on jobs, I never believed it would lead to widespread job loss. While some argue everything will be fine, I think we'll face a period of change. It's like renovating a room; it gets worse before it improves. We need to manage this transition carefully, as there will be both winners and losers. In ten years, we can revisit this to see if my perspective was accurate.
Mark:
I share your concerns and wouldn't use a general AI model for critical tasks. However, there are tools designed for specific purposes, like lesson planning, that include guardrails to avoid inaccuracies. These tools can contextualize lesson plans without having a language model generate the factual content; for instance, they might add local context to a structured plan while leaving the facts untouched.
I've seen career coaches use AI like ChatGPT, but I advise caution. As AI becomes more accurate and constrained, we'll see more applications, despite significant energy concerns. It's crucial to understand the risks of using AI tools. For example, AI can help generate ideas for a podcast but shouldn't be relied on for factual content. Some use AI as an antagonist to surface points they may have missed in their own work, as in the sketch below.
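A minimal sketch of that "antagonist" pattern, again assuming the `openai` package; the reviewer instructions are an assumption, and the output is a starting point for human judgment, not a verdict:

```python
# Minimal sketch: use a model as a critic on a draft, not as a source
# of facts. The reviewer instructions are an illustrative assumption.
from openai import OpenAI

client = OpenAI()

def critique(draft: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {
                "role": "system",
                "content": (
                    "You are a skeptical reviewer. List weaknesses, missing "
                    "considerations, and counterarguments. Do not add facts; "
                    "flag any claim that needs independent verification."
                ),
            },
            {"role": "user", "content": draft},
        ],
    )
    return response.choices[0].message.content
```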
In fields like law, AI is outperforming junior roles, raising questions about training future experts. While I see AI's potential, I also recognize the profound social challenges it presents. It's essential to build AI literacy to understand when and how to use it responsibly.
Mark:
When using AI tools for recording, it's essential to get consent from everyone involved. Most tools, especially those integrated with platforms like Zoom, notify participants and request consent. Even if you're using something like Otter on your phone, you should follow the same principle. It's like taking photos at public events where you need photo releases. While I'm not an expert on the legal aspects, obtaining consent is a good principle to follow.
Kevin:
I believe we shouldn't rely on AI to do our jobs for us. A smart approach is to use AI to augment our efforts. Recently, in a new venture I'm involved with, I spent a couple of days defining the business concept, including the problem, solution, customer proposition, and business model. Afterward, I used an AI tool to refine the customer proposition. The tool provided a comprehensive presentation of a potential business plan, including detailed customer segments.
While I wouldn't submit the AI tool's output as-is for something mission-critical, it offered valuable insights I hadn't considered. This allowed me to enhance my pitch deck significantly. The AI tool’s output was about 80% perfect in seconds, but my goal is always 100% perfection. AI can be a powerful tool for improving our work.
For reviewing others' work, I first ensure I understand the original content. Then, I might use a tool to identify potential areas for improvement, applying my own judgment in the process.
Mark:
I recently read a study from BCG and Harvard, which highlights how successful users of AI iterate and augment their work. For example, I wouldn't give an intern a task without guidance, and AI allows for quick iterations. While I don't fully trust AI, it helps me get started by providing a framework to compare against. I use it to identify gaps or new ideas, but I don't rely on it for facts.
Regarding privacy, it's crucial to read the terms and conditions of any AI platform. For open models like ChatGPT, assume that everything you input is used for training. However, some platforms use commercial APIs that constrain data, ensuring it doesn't enter the training model. I've been testing a tool called Box, which is collaborative and private, encrypting data so it's not saved or sent to the training model. It allows switching between different language models and is cost-effective.
There are ongoing issues with data use in training AI, as investigative reports have shown. Many companies are likely preparing for legal battles over IP rights. In this digital age, our constructs around IP law may need reevaluation. When using AI, if the information is public, like on a LinkedIn profile, it's less of a concern. But private information shouldn't be input into open models.
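One practical safeguard along those lines is to scrub obvious identifiers before text ever reaches an open model. The sketch below is illustrative only; the patterns are assumptions and nowhere near a complete or legally sufficient redaction scheme:

```python
# Minimal sketch: scrub obvious private identifiers before sending text
# to an open model. Patterns are illustrative, not exhaustive, and no
# substitute for reading a platform's terms of service.
import re

REDACTIONS = {
    r"[\w.+-]+@[\w-]+\.[\w.]+": "[EMAIL]",             # email addresses
    r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b": "[PHONE]",   # NA phone numbers
    r"\b\d{3}[- ]?\d{3}[- ]?\d{3}\b": "[SIN]",         # Canadian SIN format
}

def scrub(text: str) -> str:
    for pattern, placeholder in REDACTIONS.items():
        text = re.sub(pattern, placeholder, text)
    return text

print(scrub("Reach Jane at jane.doe@example.com or 403-555-0199."))
# -> Reach Jane at [EMAIL] or [PHONE].
```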
Mark:
For learning more about AI prompting, I recommend taking a one-hour course as a primer. There are many resources available, but be cautious of those from large companies with potential ulterior motives. There are opportunities for AI training through various programs, and it's worth keeping an eye on reputable sources for more information.
Kevin:
There's an important point we haven't addressed: the impact on business models, especially for those billing by the hour. Many consultants earn based on hourly rates. If I, as your consultant, typically bill 100 hours a month, but now deliver the same value in 20 hours due to AI, should I still bill for 100 hours and increase my profit margin, or pass the savings to you by billing only 20 hours? This change challenges how we operate and who benefits. We need to discuss whether fixed pricing is feasible, especially with risky scopes, and how to navigate these changes.
Mark:
Every business model will face challenges due to AI, leading to disruptive changes that will take time to resolve. For example, tools like Perplexity change how content is accessed by summarizing articles instead of directing clicks to the original sources, affecting journalists' revenue. This is just one way AI will fundamentally alter various sectors. We're in the messy middle of this transition, facing both challenges and opportunities.
I am both pessimistic and optimistic. There's a spectrum of applications, but real risks exist: in one case in Southeast Asia, five of the six participants in a webinar were AI-generated fakes, and the deception led to a $25 million loss through a fraudulent wire transfer. This highlights potential attack vectors and the need for caution. We need ongoing discussions to ensure AI development benefits society, rather than just big companies. We haven't managed social media's negative impacts well, so we must approach AI with both caution and optimism, taking active roles in shaping its future.