How Will Generative AI Transform EHR Training?

Aired on: Thu, Sep 28, 2023


Ryan Seratt, Director of Training and Development at 314e, will guide you through the boundless potential of Generative AI in EHR Training. From crafting tip sheets in mere minutes to experiencing AI tools firsthand, this webinar offers a comprehensive exploration of it all.

Key Takeaways From the Webinar

  • Learn how generative AI fits into EHR training
  • Master handling AI copyright concerns
  • Witness capabilities of AI tools in action
  • Speed up tip sheet and micro-video creation
  • Experience more possibilities with AI
Ryan Seratt

Speaker: Ryan Seratt
Director of Training and Development at 314e

With expertise ranging from software to leadership, Ryan is a true trailblazer in the field. As an AI pioneer, he’s at the forefront of crafting innovative solutions for EHR training, upgrades, and beyond. His LinkedIn insights offer actionable tips in areas such as microlearning and ongoing training, while in the realm of e-learning, Ryan sets new standards for effective training experiences.

Here is the transcript of the webinar

Casey Post 0:02
All right, well, we're going to go ahead and get started. Good morning and good afternoon, everyone. Thank you all for joining us today for this exciting webinar. My name is Casey Post, and I am delighted to have the honor of introducing today's speaker, Ryan Seratt, who will be sharing How Generative AI Will Transform EHR Training. So before I introduce Ryan, I'll take just a few minutes of some housekeeping items, and we'll provide some background on 314e. So, today's call is being recorded. Everyone's mics are muted, and cameras are off. If you do have questions, please go ahead and put those in the chat, and our colleague Amanda Gustafson will be ensuring that those answers that we get to those questions. The slides and the link to the recording will be available after the session, and those will be sent to you. Okay, let me take the next minute or so to share some background on 314e. 314e was established in 2004 and is headquartered in Silicon Valley with a clear objective and a clear focus on healthcare IT. And over the years, our passionate team has dedicated itself to creating innovative solutions that drive positive change within the healthcare industry. We understand the unique challenges faced by healthcare organizations and have tailored our products and services to meet their specific needs. We take pride in our diverse range of products designed to address critical healthcare challenges. Our solutions encompass a data archival system, a document management system, a content management system, a digital adoption platform, a health data analytics platform, and an interface engine. Each product is carefully crafted to enhance efficiency, streamline workflows, and improve patient outcomes. In addition to our product suite, we offer a comprehensive range of services that cater to various aspects of healthcare IT, from EHR implementations to training and e-learning, interoperability, analytics, revenue cycle management, and IT services. 
Our team of experts is well-equipped to guide healthcare organizations toward success in this rapidly evolving digital era. And with over a 20-year history, our solutions have benefited many of the country's largest health systems, academic medical centers, county and community hospitals, as well as critical access hospitals and independent physician groups. And our affiliations and partnerships allow us to stay current with industry trends and provide strategic advantages for our customers. And without further ado, let me introduce today's speaker, Ryan Seratt. Ryan is 314e's Director of Training and Digital Development. He is a digital-age learning leader with a passion for creating highly effective training programs using a combination of the right technology, modern learning methods, and coaching to help learners thrive in today's demanding work environments. He has over nine years of healthcare EHR experience, creating software training programs and leading digital change for organizations. Ryan, off to you.

Ryan Seratt 03:12
Thanks, Casey. So hello, everyone. Thank you for joining this morning or evening, depending on where you're calling in from. Can you hear me, okay? In the chat window, just give me a yes, give me a hello. Just type in. All right. Good to see some interaction. So today, as we go through the presentation, as questions come up, feel free to type those into the chat area. We'll be tracking those, and then we'll do a Q&A at the end of the presentation. I know there's going to be a lot of good questions today. Very excited to talk about those and hear what your questions are, and hopefully give you some insight into the topic that we're discussing today, which is AI in EHR training.

So, let's go ahead and jump in. We'll be talking a little bit about generative AI and what generative AI is, and then we'll examine some concerns. We're going to go right into the concerns today. We know that's a hot topic. And then, we'll see some examples of AI tools used in live action. We're going to actually try to create a training in 10 or 15 minutes today. So that'll be an interesting challenge. And then, at the end, we'll see how 314e is incorporating AI into our Just-in-Time training tool to make life easier for trainers and help learners find the information that they need.

So what is AI and generative AI? We want to start with just a kind of introduction to the technology. Generative AI is a very young technology. It's only been out, or at least released to the public, for less than a year. OpenAI released ChatGPT about a year ago, in November, so we're coming up quick on a one-year anniversary. But for this first year, it's been a huge buzz. We've been using AI in our daily lives for quite a while, though. With Siri, Alexa, and Google Assistant in our homes, we're able to turn our lights on and off. When we use maps, we're getting route suggestions telling us how to get to different places. Amazon and YouTube are giving us suggestions based on AI. Tools like Grammarly, or any text-to-speech or speech-to-text engine, are using AI. But generative AI is something a little bit different. Instead of working with information that already exists, generative AI is taking that information and actually creating new things. It's creating a new article. It's creating a new song. It's creating a new story for your children. It also created my last marketing blog that was a little bit overdue, and we did it in record time. Humans and AI partnering is something that you're going to see more and more. Throughout this presentation, you're going to see AI-generated art, and I didn't sit down with any of the big art programs, like Midjourney or maybe Leonardo, that artists are really using to create very unique things. All the art you're going to see today was generated with a sentence or two, and generated very quickly, in these tools.

So, how familiar are you with generative AI? We have a poll question. Amanda, can you put up the poll question? I need that. Thank you. Okay. It looks like people are coming in with all kinds of different levels, but we're definitely tending towards "it's very new; I've used it a couple of times." All right. Looks like they've stopped coming in. Amanda, can you share the results? They have been shared. All right. So overall, today, it looks like most of us are kind of new to generative AI. We probably have a lot of questions about it. We probably have questions about what it can do and what it means. So we'll definitely tailor our presentation towards that today, although we will talk to all different levels and how you can use AI, from the very start on through to more advanced usage. Thank you for that.

All right. So, no matter what your thoughts are on generative AI, there's a lot of us here, and we probably have a lot of different feelings. I know I personally have had different feelings at different times about generative AI. "Is it going to take my job?" is something that you hear quite often. The answer is probably no, but your job will be to partner with generative AI to produce a lot of the work you do today if you're working on computers and if you're working in healthcare. There are already a lot of uses for it. We're seeing it help diagnose cancer; it's able to see patterns in scans that the human eye has a hard time detecting. We've also seen that Epic is partnering with Microsoft to bring generative AI into the EHR. And there are a lot of different areas where this generative AI is going to come in. Things that used to take us a lot of time, it's going to help us do quicker, and it's going to help us do better. But no matter what your opinion is, it is here to stay.

So, I love this chart. If we look at this chart, Facebook took ten months to get to a million users. ChatGPT, the generative AI program that came out a year ago, reached that goal in five days. It is huge, and the benefits are outstanding. And we're going to talk about some of the risks and some of the questions we might have. But the benefits are just enormous. People are likening this to the invention of the car and the combustion engine. People are saying it's like electricity, or the invention of the internet. Everything that we went through with the transition to smartphones, it's going to be all of that. And it's going to be rapid. The internet was adopted over several years; this won't be. Both Google and Microsoft have announced that they're going to add generative AI into Word, Excel, Google Docs, and Google Sheets. We'll probably see that by the end of the year. So it's going to happen very, very quickly. And actually, today I was watching a summary of yesterday, when Facebook announced that they're bringing in generative AI. They partnered with Ray-Ban, and it's going to be embedded in glasses. So I'll be able to ask it what I'm seeing. It's going to be an interesting few years, to be sure.

So, just a disclaimer. I don't know if this is a disclaimer or is this just a preview of everything to come. Portions of this presentation were generated by AI, but they were supervised by a human. It's a partnership between the tool and the person using the tool. It's capable of doing a lot, but instead of actually creating a lot of the content, I'm now actually supervising that content. I'm telling it what I want, and it's creating given those parameters. And we'll see that when we dive into actually creating a course. So always supervised by a human, but it is a partnership. And I thought this was a really interesting bit of information. So we don't have any statistics on training. Training is not leading the adoption of the tool so far. But where we're actually seeing a lot of adoption is AI used by programmers. So GitHub Copilot is an AI, and it's specifically designed for programmers to do Python coding and that type of activity. And there is a huge amount of adoption, and this survey comes from them. I think that this survey would look very similar to what trainers and other people will be saying in the coming months. I'm more productive. 88% of people agree. Less frustration with coding, 59%. More fulfilled in my job, 60%. Focus on more satisfying work, higher level work, 74%. All the way down to less mental efforts on repetitive tasks, 87% agree with that. And they're able to complete it faster, 88%. This is amazing, amazing results from a tool that's been released less than a year.

So, let's examine some real concerns. Ethical concerns. Let's put up our second poll. What are you most concerned about from what you've heard about these tools?
And we have patient privacy, algorithmic bias and fairness, liability, and decision-making. Which of those resonates most with you? Very interesting results. We'll publish them here in a second. We've still got some answers coming in. All right. I think most of the back and forth has happened at this point in time. Amanda, can you share if you haven't?

So, a pretty even split. It looks like all of these things are actually concerns, a little bit higher on privacy and data security, but it's all within an answer or two. So we'll spend time on each one of these. Let's talk about copyright concerns. You've probably heard about this one in the news. ChatGPT is getting sued. DALL-E is getting sued. Claude is getting sued. All of these different AIs are being sued right now by content creators, by authors, by musicians, by artists, by groups of people that held data. These tools have been trained on about 20% of the internet, which is a pretty large portion. Just a huge amount of content goes into them. So when you're having a conversation with one of these tools, you're actually having a conversation with all of that data that's behind it. The tool is sorting through that and responding to you, much like a human would respond with an answer. So when we're talking about copyrights, it's very important to understand that the way the tools work is that they're not just taking snippets from those articles, or from those books, or from that music. They're actually processing it and making something new. A really good example would be if you ask me about a video that I just watched, and I give you a summary of it, or about the last book that I read, and I tell you what I thought about the book and how it's helped me or influenced my life. The tool's going to do the same thing. It's not reproducing the book, but it is giving a summary, just like a movie snippet or a reaction video. So it is creating something new, which is an interesting place to be. There's no settled copyright law around AI generation as of yet. It's too new. The lawsuits have been filed, but there's no case law to go off of.
So we can look at what has happened with the internet and with other lawsuits around copyright, and make some guesses about what's going to happen. But we don't know at this point in time. One thing that's pretty important around copyright concerns, though, is that the lawsuits are being filed over the way that these systems were trained, not really over what's coming out of them. So if I were to use the tool, the legal question is about whatever was put into it, not really what's coming out of it, if that makes sense.

All right. So, let me show you this example. I actually used Claude 2 to write an icebreaker for a video introducing legal concerns around generative AI. And this is what it produced. So I'm going to go ahead and play that video, and then we'll talk about it.

Good morning, folks. Buckle up because I'm going to tell you a riveting tale about a company called Acme Corporation and their misadventures with AI. Acme was a hip, high-tech startup led by its visionary founder, Wiley Coyote. Oh, Wiley was a real go-getter, always hatching up harebrained schemes to take over the world. I mean, boost profits. So, one day, Wiley gets a bright idea. Let's use artificial intelligence to hire employees to build AI. Let's use artificial intelligence to hire employees. He imagined an AI system effortlessly sorting through useless candidate applications to find the perfect workers. No more wasting time on interviews. The computer would just know. Well, Wiley brought in a fancy AI developer called Tecron to build this system. Tecron warned it could be biased, but Wiley ignored him. Just get it done fast, he said. And before you could say meep meep, Acme's new AI recruiting system was up and running. Wiley eagerly offered it a giant stack of applications. Find me the best of the best, he cackled. But problems quickly arose. The AI was rejecting tons of qualified women and minority candidates for no good reason. And the humans who got through were duds. The AI had a knack for picking out kooky characters. Soon enough, Acme was getting sued left and right for discrimination. Employees were protesting outside Wiley's office, holding angry signs about the nutty AI. But Wiley just scratched his head. Unable to comprehend, his brilliant plan had backfired. Finally, Tecron had to sit Wiley down and explain. The AI reflected our own biases back at us. I did try to warn you. Wiley realized he had been foolish to only care about speed, not ethics. Acme now had to fix this mess. And Wiley's reputation would never be the same. The moral here is that while AI is powerful stuff, companies can't just blindly barrel ahead with it. You gotta stop and think, is this fair? Transparent, safe, that's what this training is all about. 
So let's dive in and make sure your AI plans don't go kaboom like Acme's did.

All right, so that little story, in the chat box: is that a copyright violation? What are your thoughts? The story that it made, does it violate copyright, specifically Warner Brothers' copyright? In the chat window, just type yes or no, or any other thoughts. Joel says he thinks that using the Coyote does. John's giving it a yes. Anwar, yes. Okay, Kevin's got "there is no copyright after AI." I like that perspective. "I didn't choose the characters." Got an "Acme definitely violates copyright laws." Okay, so this was kind of interesting. And yeah, the Wiley Coyote character is definitely copyrighted by Warner Brothers. So now the interesting conversation, I think, is that I just used this in a training class. If I had actually used images of Wiley Coyote from Warner Brothers, that definitely would have been a copyright violation. But since this is not a commercial class, I actually have a little bit more freedom. I asked the AI whether what it generated was a violation of copyright, and the AI said that it was. I thought that was interesting. And it actually suggested that instead of Wiley Coyote, we call the character Foxy the Fox, and it would have been a very, very different story. By changing it, it removes the question of a copyright violation. Also, since this is a training class where we're learning together, I'm using it for education, which puts it into the fair use realm. Copyright is always balanced by fair use. Did I make enough changes, or am I using it in a way that educates? That puts it into fair use. I think we could probably do an hour on copyright concerns. But the short answer is that anything coming out like this is the same result I would have gotten from the internet if I was searching.
So my life really hasn't changed that much, except now I have a great partner that's able to make much better stories than I could by searching the internet and putting things together. I think I created this entire thing in 15 or 20 minutes, even with the art.

All right. So we're going to move from copyright concerns about what goes into these tools to copyrighting the content that comes out. I think I saw a couple of comments in the chat about that. What is generated by AI is not copyrightable as it comes out. I actually asked Simplified AI to make me a funny meme about AI and copyright, and this is what it generated. This was just a one-prompt generation from Simplified, and I thought it was funny, and I love it, so I'm going to use it. So here it is in the presentation. This is not copyrightable. The machine made this entirely. And I don't know if you've heard of the lawsuit over the monkey selfie. A photographer was taking pictures and dropped his camera. A monkey picked up the camera and accidentally took pictures of itself, and the result is called the monkey selfie. The photographer used the photos because they're fantastic, but PETA actually sued the photographer, saying that the monkey owned its own image. Very interesting legal battle. You can look it up, or have generative AI give you a summary of it if you don't have a lot of time. But basically, what was ruled is that only humans can copyright material. So, AIs cannot copyright what they generate. If I change what the AI is giving me significantly, so if it's giving me seeds for a presentation, for example, and I'm taking that information and changing it significantly, then I can actually copyright it. But when it comes out of the machine, it is not copyrightable.

All right, then we have my Picasso type of AI-generated art. I asked it to make me art about hallucinations in the style of Picasso, and that's what we have here on the screen. This image has never existed before. It was purely created by AI, so it's not copyrightable, and it definitely doesn't violate any copyright laws, unless the way they trained the art generator on people's art does, which is very questionable. So, avoiding false information, biases, and hallucinations. You'll hear quite a bit about this, especially on the bias front right now. When we're talking about bias, most of these tools were created in the United States. So, they have a Western influence, and they also have an English bias. I think that's natural; if one was made in China, I would expect it to have a Chinese bias. But once they were released a year ago, they started to be used worldwide by a lot of different people in a lot of different languages. So even though the initial information they were trained on was English, Western information, they're starting to contain a lot more. We're going to be using Google's Bard, which is a competitor to OpenAI's ChatGPT. That now has over 400 different languages in it, and it does a really good job in about seven or eight of those. So when people are talking about bias, there are natural biases in here.

Let me give you an example of a bias that I've seen. When I asked it, "Who are the ten best poets that have ever lived?" it gave me a list of 10 fantastic poets. And nine of them were English-speaking people from the West. At that point, I was like, okay, this isn't quite what I wanted, and this definitely is a Western bias. So I asked it, "Who are the ten best poets to have ever lived in the world?" I changed what I was asking. And then it gave me a list of a bunch of poets that I'd never heard of, from a lot of different cultures, whose poetry would be interesting to see. So, part of the bias was just that it assumed I was an English-speaking person in the Western world, but I can actually steer what's coming out by asking better questions.

And hallucinations. Hallucinations are when the AI makes something up. It just flat-out makes something up. And here's kind of the trouble with biases and hallucinations: as people use the tool, they give it feedback, so it actually learns, and it gets better. An example that was a good example yesterday is not going to be a good example today, because it's learning, and it will give better information the more people use it. But a hallucination that I saw: someone wanted a unique chocolate birthday cake, and they wanted the recipe for it. Well, it gave them a recipe. And in that recipe, there were some very weird things, like avocado, not something you would normally want in a birthday cake. But it also kind of depends on the prompt. We asked it to make something very unique, and a chocolate avocado birthday cake would be very unique. The human really has to look at the information that we're getting back to make sure that it's correct. And that's kind of the same as the internet. The question is, what is our expectation of a tool that is meant to have conversations with us, both factual and non-factual? If it's supposed to be purely factual, it's not. It doesn't really understand what's a good idea and what's a bad idea, and sometimes it makes things up when it gives responses. But as I said earlier, I'm responsible for the information. I'm managing what it's giving me. When it gives me information, it's extremely competent, but just as with a junior team member, I'm really going to look at the work they're giving me on a project. These tools aren't replacing people on the project; it's best to treat them like another person on the project who's giving you information. And then, securing business data.

So, the question is, if I put something into ChatGPT or Bard, is it private? And the short answer is no, it's not private. You're not paying for the service, and there's the old adage that if you're not paying for the product, you are the product. So, as you are giving it information, they're not sharing that with other people, but they are training the system with it. What that means is that if I put proprietary information into the tool, it might show up in some other way. As the machine learns, it could accidentally give that out. And the programmers are seeing that information as well. So, it's not secure. The way to get around that is using the free tools for free things. You're looking at that 20% of the internet, you're pulling from it, and you're working with it. So, you can definitely work with the free versions. But there are also paid versions, where you're getting your own version that you can actually enter your data into. You can set that up through OpenAI, which just released ChatGPT Business, you can set it up internally, or you can work with companies like 314e, where we actually give you your own large language model and all of your information is secure. It's your own version. It's not shared out. So, securing business data: if you wouldn't put the information in an email, I wouldn't put it into a non-paid, non-secured version of any of these tools.
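If you do experiment with the free tools, one practical guardrail is to strip obvious identifiers out of a prompt before it leaves your network. Here is a minimal Python sketch of that idea; the patterns and the `redact` helper are illustrative assumptions, not anyone's product, and a real healthcare deployment would use a proper PHI/PII detection library rather than a few regular expressions.

```python
import re

# Hypothetical patterns for illustration only; real PHI detection
# needs a dedicated library, not three regexes.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "MRN": re.compile(r"\bMRN[:#]?\s*\d+\b", re.IGNORECASE),
}

def redact(text: str) -> str:
    """Replace obvious identifiers with placeholders before the text
    is sent to a public, non-secured chatbot."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Summarize the note for MRN: 4471022, contact jdoe@example.com"
print(redact(prompt))  # the redacted version is what would go to the free tool
```

The same "if you wouldn't put it in an email" test applies: redaction reduces risk, but a private, paid instance is the safer path for anything proprietary.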

All right. Let's look at AI in the classroom. So, we're going to create a training, and we're going to use Bard in this case, one of the competitors to ChatGPT. The other big ones out there are Claude and LLaMA. Claude's maker, Anthropic, actually just got a $4 billion investment from Amazon. You've got ChatGPT and Bing, which are working together on their large language model. Bard is Google, and you might hear about LLaMA, which is Facebook, or Meta. So, you'll hear those out there. There are going to be a lot of them, and they jump back and forth. Every day, somebody releases something incredible, and I'm like, oh, this one's much better. And then the next day, someone else will release something. Different ones excel in different areas. But we're just going to use Bard today. They had a new release, and I like one of their features in particular. So, this is Bard, and it's available. It's not a paid version, but you'll see that it is experimental. They are still learning. They get smarter and smarter every day, and they get more amazing. The first thing we're going to ask it is, are the Denver Broncos going to be good this year? And we are probably not going to like its answer. Unless you're a Kansas City Chiefs fan, then you probably will like its answer. All right. So, the Denver Broncos have the potential to be a good team in 2023. They have a lot of talent on both sides of the ball, and they made some significant changes to the coaching staff and the roster. It's given me this information, and I can have it read it to me if I click on that. And one of the new things that came out with Bard on Friday is that now I can double-check a response. I love this, and this is why we're using Bard today. What Bard is doing is going through and evaluating the answer that it's given us. It's in green, so it found similar content.
So, it's confirming that these different elements it's made are good content and that we can find that out there. If it wasn't sure of the content, it would actually be in orange. So, let's try this: are the Baltimore Ravens going to be good this year? And it remembers the first bit of our conversation, so it knows that we're talking about football; I didn't have to spell that out. It gave me the response here, and I'm going to double-check what it says. It's looking at it, and once again, this is all in green. When I did this test earlier, I definitely had orange pieces on here, and it looks like it's taken them out now. As I said, these examples change all the time; it's learning as we go through here. So, I do want it to pull one up. Let's see, what if I ask who's going to be bad this year? We'll say, what was that? The Minnesota Vikings. The Vikings? Okay. So, it's created this response, and we once again have all green information. It's learning as people are actually hitting this and going in and looking at the different information; it's actually taking things out of these conversations. So, it's already learning since Friday. Earlier, it would look the same as this, but when I clicked on a piece, it would say that it found additional articles that do not support this. So, it's a really great tool as far as checking information. But as I said, just like with a junior person on the team, I'm going to need to look at this and make sure it's correct. So, the Minnesota Vikings have the potential to be a good team. I think there are several people that might argue that. It's giving us information, and we need to be able to consume it and determine what's good.

All right. So, that's an example of hallucinations and what checking them looks like. But let's go ahead and create some content. I'm going to create a new chat, and I'm going to give it a prompt. I've got all my prompts saved over here on the side, so I don't have to type them out. "If you're an instructional designer, please create a one-hour virtual training course on how to create guardrails for using generative AI at work." Go ahead and hit enter, and it'll actually create the course for me. I've got an introduction, learning objectives, module one, introduction to generative AI guardrails. It looks much like what we're doing today, even developing guardrails for your organization and implementing guardrails. That looks like a pretty decent course. So, what kind of Socratic questions could I ask during that course? Questions to the class: What ethical considerations should we keep in mind when using generative AI? How can we ensure generative AI is used in a way that's fair and unbiased? So, it's given me questions that I can use in the class. Now, I want to add those Socratic questions to my outline. This is a pretty normal training development process. And now it has put it all together. Next, I want to add one activity for each section; we'll need some interactivity. Participants will watch a short video about the potential benefits and risks of using generative AI, very much like the video I showed you. So, I've got some different activities here, and I want to add those activities into my outline. So, just real quick, I'm making a longer outline. And I want to create an icebreaker for this class; it's given me an icebreaker that we can do as a team. What competencies would be used for each section? All normal instructional design types of questions. I want to create a multiple-choice test for the course; I've got five questions here, and I want to check those and make sure they're what I want to ask.
And I want to create an enrollment letter, an email that I can send out to everyone. It gives me the course name, date, time, and location: please log into the training platform at the scheduled time to join the course; this course will teach you how to create guardrails; in this course, we'll learn about these things. We'll skip over some of this for the sake of time, but we can ask it to create letters for the day before, and we can ask it what else I can do to improve this class, and it will give us suggestions. Here's a survey for the end of class. And I thought this was kind of interesting, because you can even do a level-two survey: instead of just impressions of the class, one month after class I want to send out this letter and ask them how they're actually using the learning. It didn't know what Kirkpatrick level two was, so I explained that it's all about applying what they learned. "I have applied the knowledge. I'm more confident in my ability to use it. I would recommend this course." And then it's giving me additional tips for gathering feedback, so it's actually helping me with ideas I might want to think about in our partnership. Then, of course: can you help me put this into a PowerPoint? And here's what goes on each slide. And then, using generative AI for images, you saw quite a few of the images that I made; here's how I made those. I'm in Bing Image Creator, where there are monkey selfies, by the way, and who doesn't love a good monkey selfie? "Create a picture of a road with guardrails to be used in a PowerPoint." It's generating right now, and any of these would be good guardrail pictures that I could put into a PowerPoint I'm giving. So, very easy to use. And you'll hear a lot about prompt engineering.
I would recommend just getting in, typing things, and seeing what happens. You can always ask follow-up questions; that's one of the fantastic things about generative AI compared to the internet. On the internet, I have to be very exact in what I'm looking for, and if I don't get good results, I rephrase, and rephrase again, starting from scratch every time. With generative AIs like BARD, it's learning from what I'm asking and getting better each time. So if I'm having a conversation with it and I don't get exactly what I want, it will learn more and build on our conversation as we go. That's something very new and very different.
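The reason follow-up prompts work is that chat tools resend the accumulated conversation with each request, so "add those Socratic questions to my outline" is interpreted against earlier turns. A minimal sketch of that mechanic, with the model call left as a stub (the class and message shape here are illustrative, not any specific vendor's API):

```python
# Minimal sketch of conversational context: every new prompt is sent
# along with all prior turns, so follow-ups build on earlier answers.
# send() is a stub; a real tool would call a model API here.

class ChatSession:
    def __init__(self):
        self.history = []  # alternating user/assistant messages

    def send(self, user_prompt: str) -> list[dict]:
        """Append the user's turn and return the full payload the
        model would receive, including all prior context."""
        self.history.append({"role": "user", "content": user_prompt})
        payload = list(self.history)
        # A real client would send `payload` to the model here and
        # append the reply as {"role": "assistant", "content": ...}.
        return payload

session = ChatSession()
session.send("Create a one-hour course on generative AI guardrails.")
payload = session.send("Add Socratic questions to that outline.")
print(len(payload))  # prints 2: the follow-up carries the original request along
```

This is the contrast with a search engine: each search query starts from nothing, while each chat turn ships the whole conversation so far.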

So, let's go ahead and go back into our PowerPoint. Current awareness: we have a third poll about using AI content creation in class. Now, as we went through that, we weren't focused specifically on EHR or software training, and that's because our EHRs don't have their data loaded into the public large language models. So while we can create a class on sexual harassment, on COVID, on safety policies, on OSHA, on all of those kinds of things that are in the public record, our EHR catalogs are not loaded in, because a lot of that is private information. But could I still create a letter to send out by email? Absolutely. Could I take the curriculum I'm using in class, without screenshots, load it in, and create test questions and a lot of that content? I absolutely could do that. Or I could look at a private model, which gives me a lot more flexibility on the data that I put in. So, on using it in EHR training, it looks like we have a good mix of answers, but most people are not very familiar with using AI in an actual training class. One of the things about AI: AI is amazing. This is AI-generated art, once again generated from a single command string. I asked it for a doctor with a word bubble that I can type things into. This person is supposed to be a doctor; you can't really tell, though they are wearing a tie and a coat, probably. But it's good enough, and for my purposes of creating content quickly, it definitely is good enough. So now we're going to move from the concepts and show you how 314e is adding AI to our Just-in-Time training tool that delivers microlearning. Very interestingly, 80% of ChatGPT users are what people would describe as professionals. ChatGPT is a public language model, and there is a huge need for being able to enter your information in a secure environment.
We saw that when ChatGPT was released. We actually re-engineered our software completely after the release last November, and we have a product to show you today. It is a private model that uses the same technology you saw in our examples, but it's behind firewalls, it has security, and it's only your data: each one of our customers has their own instance.

And let me show you a couple of ways that we're using AI with Jeeves. Jeeves delivers documents and videos to people, and the search engine is key to making that happen. When I have a question, I can type it in here, and it will search the entire catalog. It's very different from a lot of the searches you're probably used to, because many searches are built off of the title or a description. In our large language model search, we're actually searching the contents of the video, meaning the transcript of that video, and the entire document. We're looking at the words and how they're put together, so it's not just a hit on the words, but how many times the words are used, along with similar words, that pulls up the information. Then our algorithm pushes results to the top based on where you work. So, search "how do I load an insurance card?" The first result is how to enter insurance details; that seems right. Patient visit history; that seems right too. I'll click on the first one, and now the AI brings up the video it suggested. Notice that the AI has put markers along the bottom of the video showing where the pertinent words I searched for are located in this particular video. Since it's a video on insurance information, there are quite a few hits, and that's why it's the number one result. So we're using AI to do our searches and to put these markers in for us. In addition, we're using AI to run our chatbots, so you can have a conversation with the content you've loaded into your system. "How do I create auto text?" That's a question for Cerner; Cerner uses auto text, Epic uses SmartText. We have some Cerner information, but this one's not pulling up, so I'm going to check and make sure that everything is selected.
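The ranking idea described here, scoring full transcripts by word frequency and boosting results for the searcher's department, can be sketched as a toy. This is not Jeeves' actual engine; the asset data, boost factor, and scoring are all invented for illustration.

```python
# Toy ranking sketch (not Jeeves' real engine): score each asset by how
# often query words occur in its full transcript, then boost assets
# tagged for the searcher's department.

def score(query: str, transcript: str, boost: float = 1.0) -> float:
    """Count query-word occurrences across the whole transcript."""
    words = transcript.lower().split()
    hits = sum(words.count(q) for q in query.lower().split())
    return hits * boost

# (title, full transcript text, department tag)
assets = [
    ("How to enter insurance details",
     "click registration then insurance enter the insurance card number "
     "and scan the insurance card", "registration"),
    ("Patient visit history",
     "open the chart review tab to see prior visits and insurance notes",
     "clinical"),
]

def search(query: str, department: str) -> list[str]:
    """Rank assets by score, boosting the searcher's own department."""
    ranked = sorted(
        assets,
        key=lambda a: score(query, a[1], 1.5 if a[2] == department else 1.0),
        reverse=True,
    )
    return [title for title, _, _ in ranked]

print(search("insurance card", "registration")[0])  # How to enter insurance details
```

A production system would use embeddings to catch "words similar to that word" rather than literal matches, but the shape is the same: transcript-level scoring plus a role-based boost.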
And I'll just select all categories; maybe we don't have any auto text content in this environment. All right, so we'll ask it about flow sheets instead. Okay: the last update on flow sheets is related to the Flow Board for clinicians. A lot of the information is on Flow Board, which is included in this, so it's giving me the last update here, and I can click on "View the Flow Board Information" to go into the video or document that it's pulling this information from. So I'm having a conversation with the content that has been loaded in. It's not out on the internet; it's private, it's not shared. This is your organization's information. Let me show you another way that we're using AI. One thing we do quite often in health care is make video tip sheets, but some people like paper tip sheets just as well, and I think this is a really amazing use. Making a how-to guide normally takes about 30 to 45 minutes; with our automatic AI tip sheet creator, we can create one in a fraction of that time. We just released a new feature, asset versioning, so we'll grab this video, with Amanda narrating how to do it. The AI has gone through the video and pulled out all of the action items, and I can now move those over into the steps. As I'm watching the video, I'm making sure these steps are correct, because remember, you're partnering with the AI, but you're managing it. So here we are: we're clicking on the Edit button. "Click on the Edit button." And I want to capture the screen, so I capture it and drop that in along with the steps, then make sure the rest of the steps are correct. As it goes through the video and I'm listening, I can grab different parts and just throw the screenshots in.
Kind of rinse and repeat that throughout the entire process to build out my tip sheet. This particular video is two minutes long, and it's probably going to take me five minutes to verify all of this. I can add extra steps if I want to, I can import my own screenshots, and if I wanted to take these and mark them up in a different tool, I could do that. Then I simply save it, and here's the tip sheet we were just working on together. Very easy to do, and it can be rebranded; as I said, this is all your information. So, the AI does the search. The AI adds closed captions to all of our videos at 98% accuracy, so I don't have to. When I create a five-minute video, it takes me five minutes, and I can create a tip sheet to go with it in another five minutes. That allows our developers to create a lot of content specific to people's workflows very, very quickly. And that's how we're using AI to power Jeeves.
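The step-extraction idea, pulling action items out of a video's transcript for a human to verify, can be sketched very simply: split the transcript into sentences and keep the ones that start with an imperative action verb. This is a toy stand-in for the real extractor; the verb list and transcript are invented.

```python
import re

# Toy sketch of a tip-sheet step extractor: scan a video transcript for
# sentences that begin with an action verb and turn them into numbered
# candidate steps for a human reviewer to confirm against the video.

ACTION_VERBS = ("click", "select", "enter", "choose", "open", "save")

def extract_steps(transcript: str) -> list[str]:
    steps = []
    for sentence in re.split(r"[.!?]\s*", transcript):
        s = sentence.strip()
        if s and s.lower().startswith(ACTION_VERBS):
            steps.append(s)
    return [f"{i}. {s}" for i, s in enumerate(steps, 1)]

transcript = (
    "First we go to the asset page. Click the Edit button. "
    "Then you see the version list. Select the version to restore. "
    "Save your changes when done."
)
for step in extract_steps(transcript):
    print(step)
# 1. Click the Edit button
# 2. Select the version to restore
# 3. Save your changes when done
```

The human-in-the-loop step Ryan emphasizes maps directly onto this: the extractor proposes candidate steps, and the author watches the video, corrects them, and attaches screenshots.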

All right, and let's go ahead and take some questions. And Amanda, what have you seen in the chat room?

Amanda Gustafson 53:18
Sorry about that. I was still muted. So, Bill was wondering if you could recommend a starting point that would enable us to begin using AI to assist with EHR training needs.

Ryan Seratt 53:30
Yeah, I think the starting point would actually be defining how your teams are going to use AI, what's acceptable and what's not. I would definitely want to create training on protecting data and on what you're going to use the private AIs for. Today, you could definitely send out letters to people: welcome letters, reminder letters. You could have it make tests from some of your content that's not sensitive. You could have it create icebreakers if you're doing live or virtual classroom training. There's a lot of content you can create, or you can have it recommend articles as well. As you're going into those trainings, you can use it to pull a lot of the research you're going to do, help with your scripting, or even write role plays if you're going to do role plays in class.

Amanda Gustafson 54:29
Thank you. Great. Are there any specific tools that you'd recommend or services?

Ryan Seratt 53:34
I would definitely recommend the 314e Jeeves tool; I think it's a fantastic tool for making content very quickly. For other tools, definitely take a look at ChatGPT and all of its plug-ins. Take a look at BARD and what it's able to do. DALL-E for images is fantastic. Situation Expert, for creating thought-process training, is another great tool, along with Vyond. Vyond is great at making cartoon-style dialogue interactions for training, and they're doing some fantastic things with AI as well.

Amanda Gustafson
Great. And for the BARD questions you were asking earlier, assuming you'd walked through those same questions on the same tool beforehand, did you have to correct any of its output, or were you able to just put those questions right in there?

Ryan Seratt
Yeah, as I created those, I just went through my normal thought process when I'm creating training, documented which questions I was asking, and put them in another document so I could paste them in, because I'm not a great typist. I'm sure I did learn through that process. But you can always ask it to be more specific, to use fewer words, to "tell it to me like I'm a fifth grader," or "tell it to me like I have a master's degree." You can give it all of those commands and it will follow them.

Amanda Gustafson
We've got another question. How long does it take to train the model on organizational data? And what does adding new assets look like, keeping things up to date?

Ryan Seratt
Sure. I'll address keeping things up to date first. Maintenance is always an issue when you're creating content. The nice thing about video in Jeeves is that I can take the closed captioning, the transcript of the video I just made, make any changes I need, go into Cerner or Epic, rerecord it, and replace the asset in five minutes. Very easy to do.
With a five-minute video, you don't edit it; you throw it away and make a new one. That's the maintenance strategy. The second part, training: every piece of data we load in is an element the model interacts with, so there isn't a lot of training back and forth. The larger the library gets and the more information we put in, the better it works. Since it's your own instance, the framework is all there from the work that GPT and the others have done. We take that framework and how it looks at data, and we apply that learning to your data. Your data fills out the content, but the pathways are already there.
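"Your data fills out the content, but the pathways are already there" describes a retrieval pattern: a pretrained model supplies the language ability, while answers are grounded only in documents the organization has loaded. A minimal sketch under that assumption, with the model call left as a stub and all document names invented:

```python
# Minimal retrieval sketch: find the loaded document most relevant to a
# question, then hand it to a privately hosted model as context.
# The generate() step is shown only as a comment; this is not Jeeves' code.

def retrieve(question: str, docs: dict[str, str]) -> str:
    """Return the title of the doc sharing the most words with the question."""
    q = set(question.lower().split())
    return max(docs, key=lambda title: len(q & set(docs[title].lower().split())))

def answer(question: str, docs: dict[str, str]) -> str:
    title = retrieve(question, docs)
    context = docs[title]
    # A real system would prompt the private model here, e.g.:
    # generate(f"Answer using only this text: {context}\n\nQ: {question}")
    return f"[answered from: {title}]"

docs = {
    "Flow Board for Clinicians": "the flow board shows flow sheets and updates",
    "Insurance Entry": "enter the insurance card details on registration",
}
print(answer("what was the last update on flow sheets", docs))
```

This also explains why there is little per-customer training: adding a new asset just adds another retrievable document, so a bigger library means better answers without retraining the underlying model.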
Amanda Gustafson
Fantastic. Those are all the questions that came through.

Ryan Seratt
Great. Any other questions? And I'm on time for the first time in my life; it never happens. If you do have other questions, please feel free to reach out to me. Here's my email address, my LinkedIn, and our website. We'd be glad to have further conversations with you, answer any questions you have, get a virtual cup of coffee, or do a more in-depth demo of Jeeves and how our other customers are implementing it to make sure their teams can find the information they need and perform well. So, thank you for your time today. Appreciate you all.