28/06/2024 • 10:00 am
This webinar was curated for the internal leadership team of a Toronto-based organization in the employment services sector. The session focused on enhancing participants’ understanding of generative AI tools and use cases, and provided tangible lessons and skills training.
Email us at magnetai@magnetnetwork.ca if you would like to have a MagnetAI session for your internal team.
In November 2022, a senior director and I attended a tech conference in Lisbon. We took the ideas from those talks and challenged our senior leadership team to stay ahead of this important trend. For my own ChatGPT, we fed it our strategy documents (ones we weren’t concerned about ending up in training data) to customize it to my style of voice. With more training, it starts writing more like me. I ask it how to frame what we’re doing internally, and it helps speed up a lot of the work.
It’s a process of working with the tool, customizing it, and training it. As with a human employee, it’s an iterative process of improving the answers.
In a session I did for newcomers, they shared that they were having a hard time asking good questions in English. The great thing about these large language models is that you can ask in any language and then translate the output. This can be a great benefit in helping employees become more productive when they are properly trained.
Even while someone is still learning a language, there are a lot of tools - even older language-training tools - that improve translation and can speed up an internal team significantly.
That is still a real concern, and a lot of LLM companies are trying to address it. There are even public examples of potentially overcorrecting for it, such as with generative image creation.
Understanding how LLMs are trained is important. Knowing how to prompt them can significantly reduce biases - you can ask the model to take a certain perspective. The best solution is to use more verticalized tools where the LLM is constrained to draw from a defined body of content. This is called retrieval-augmented generation (RAG). Understanding the tool and the knowledge base it’s pulling from is the most important piece - you can have it create content and translate knowledge from content that has already been approved, which constrains what it can draw on to that specific material.
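As a rough illustration of the idea (a minimal sketch, not any particular vendor’s product - the documents, question, and retrieval method here are all assumed for the example), RAG retrieves passages from an approved knowledge base and builds a prompt that constrains the model to that content:

```python
# A minimal retrieval-augmented generation (RAG) sketch.
# The document list, question, and `ask`-style workflow are illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# 1. An approved knowledge base: the only content the model may draw from.
approved_docs = [
    "Our resume workshops run every Tuesday and are free for all clients.",
    "Career counselling appointments can be booked online or by phone.",
    "Newcomer employment services include language-specific job coaching.",
]

def retrieve(question: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank approved documents by TF-IDF similarity to the question."""
    matrix = TfidfVectorizer().fit_transform(docs + [question])
    scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()
    return [docs[i] for i in scores.argsort()[::-1][:k]]

def build_prompt(question: str) -> str:
    """Constrain the LLM to answer only from the retrieved passages."""
    context = "\n".join(retrieve(question, approved_docs))
    return (
        "Answer using ONLY the approved content below. "
        "If the answer is not there, say you don't know.\n\n"
        f"Approved content:\n{context}\n\nQuestion: {question}"
    )

print(build_prompt("When are the resume workshops?"))
# The resulting prompt would then be sent to whichever LLM the
# organization has approved (the API call itself is omitted here).
```

The design point is that the model only ever sees retrieved, pre-approved passages, so its answers stay grounded in content the organization has already vetted.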
Being scared of this, or ignoring it, is not the solution; learning and understanding it is. Implement GenAI to remove mundane tasks, like administrative work, that aren’t adding value, so people can focus on facilitating connection and communication. There’s pressure from organizations, funders, governments, and others to look only at efficiency - how to scale or automate career services, counselling, and support. But this is never going to replace a person, even in situations where AI can be perceived as more empathetic.
Use the tools appropriately, but save time to work with people and help them through challenging situations.
Yes, it's pulling from one of the largest foundational LLMs. Anything you put into Copilot or a similar tool will probably go into its training data.
Prompting is really important. You can ask GPT to take on a perspective, such as an EDI expert, to set the context. That constrains it, even with its large training data. Recent studies show that LLMs can score highly on Theory of Mind tests, meaning they can model other people’s perspectives - and they can even perform better than some humans on those tests.
This shows they have a perspective - and you can ask them to have any kind of perspective.
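As a minimal sketch of this kind of perspective-setting (assuming the OpenAI Python SDK; the model name and persona wording are illustrative, not from the session):

```python
# Persona prompting: a system message sets the perspective before
# any user input. Model name and persona text are assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",  # assumed; use whichever model your organization approves
    messages=[
        # The system message constrains the model to the chosen perspective.
        {
            "role": "system",
            "content": (
                "You are an EDI (equity, diversity, and inclusion) expert. "
                "Review content for biased or exclusionary language and "
                "suggest inclusive alternatives."
            ),
        },
        {
            "role": "user",
            "content": "Review this job posting for biased language: ...",
        },
    ],
)
print(response.choices[0].message.content)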
There has been a significant increase in attack vectors. It’s not just the familiar cybersecurity issues and data breaches; the number of ways to mount an attack has grown. For example, there was a widely reported case of a senior executive supposedly calling an emergency meeting one night to complete an international wire transfer for 30 million dollars. The senior executives on that Zoom call were synthesized. It was not a technical hack; it used voice and video synthesis technology.
This is also a concern with scams targeting elderly people. AI’s ability to mimic individuals can have huge implications for elections, so it’s especially important to do your due diligence around election time.
For a company that works on technical and back-end platforms, it’s important to be aware of the policies. There’s no bulletproof way to protect any platform right now, but ensure that the platforms approved in your organization are vetted against leading-edge cybersecurity and data safety protocols.
There are several areas where scraping can be used, such as scraping publicly posted jobs to do a labour market analysis or to aggregate jobs into a job feed. That’s technology that has been around for a while.
The scraping I’m talking about with OpenAI is different: there are bots that crawl websites and pull all the information from the internet that they can. They copy information regardless of where it comes from or who it’s attributed to - LinkedIn profiles and so on. There are policies that organizations are supposed to abide by, and safety and security teams can block certain scraping tech from parts of their websites, but these bots will try to reach every website and all available content on the internet.
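For illustration, site owners usually express these blocks through a robots.txt file; GPTBot is OpenAI’s documented crawler user agent, and Python’s standard urllib.robotparser can check what a given file allows (the directives and URLs below are an assumed example):

```python
# Check which crawlers a site's robots.txt blocks, using only the
# Python standard library. The robots.txt content is an example of
# directives a site owner might publish.
import urllib.robotparser

ROBOTS_TXT = """\
User-agent: GPTBot
Disallow: /

User-agent: *
Allow: /
"""

parser = urllib.robotparser.RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

# GPTBot (OpenAI's documented crawler) is blocked; others are allowed.
print(parser.can_fetch("GPTBot", "https://example.org/jobs"))   # False
print(parser.can_fetch("SomeBot", "https://example.org/jobs"))  # True
```

Note that robots.txt is a convention rather than an enforcement mechanism: compliant crawlers honour it, but nothing technically stops a bot from ignoring it.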
There are many lawsuits over whether the information used to train LLMs was obtained appropriately. For example, OpenAI was found to be scraping YouTube’s content. Google never complained about it because they were doing the same thing themselves, even though it’s against their own terms. Those fights and lawsuits will come later, because these companies have billions of dollars. They’re in desperate need of data right now, with demand outstripping what’s available.
Funded in part by: