Video: Free CE Course: AI Tools to Support Your Private Practice | Duration: 5516s | Summary: Free CE Course: AI Tools to Support Your Private Practice | Chapters: Introduction and Disclosures (272.815s), Course Completion Requirements (460.71002s), Sponsor Approvals (519.63s), About Dr. Lindsay Oberleitner, Instructor (560.935s), Course Overview (641.485s), Overview to AI and Mental Health (741.24s), Range of AI Applications (1608.735s), Decisional Support on the Use of AI in Practice Chapter (3137.205s), Case Examples (4665.02s), Steps to Implementing AI in Practice (4782.875s), Q & A (4839.505s), Closing Remarks and Reminders (5421.78s)
Transcript for "Free CE Course: AI Tools to Support Your Private Practice": Good morning or good early afternoon, wherever you are joining us from. Thank you so much for joining us today for this CE webinar on AI tools to support your private practice. My name is Dr. Lindsay Oberleitner. I will share a little more about myself before I get into the material, but first, a few housekeeping notes about the seminar today. First, I see some of you have found our chat function on the right-hand side of your screen. Please feel free to chat with each other; in fact, there will be some parts of today where I'll encourage you to share information with each other. But I do want you to know that the right sidebar is not monitored as closely as our Q&A. So if you have questions you would like me to answer at the end of this webinar, you need to put those in the Q&A section; you'll see a tab just a little farther over to go to Q&A. If you see questions you really want to make sure I get to and can answer, please upvote those as well. There is also a docs link where you can get this whole PowerPoint, as well as references for the presentation, so please access those if you would like them. With that, I want to jump into our slides. The very first thing I want to mention is disclosures. We'll go to the next slide, which is just to say that, as I will share in a moment, I do work at SimplePractice, but this presentation reflects my own expertise and my own perspective on AI in a professional context. This is a continuing education course, so it is not focused on SimplePractice tools; that's not what you'll be hearing from me today.
We'll talk much more broadly about the ways we can think about integrating AI into practice, some of the ways you can think about making decisions for yourself and your own practice about how you may or may not want to use AI, and some very specific concerns. Tying into this, in that Q&A section I'll be able to answer the kind of bigger, broader questions appropriate to a continuing education course: questions about your professional context, how you think about practice, big questions like that. But I won't be able, in this continuing education course, to talk specifically about products, ours or others'. I just wanted to make sure to mention that. Now to the next slide, which covers something I'm sure many of you are interested in: how you will actually get that CE credit. First is viewing this course in its entirety. If you need to step away today for any reason, there will be a recording available and you can complete it that way, but you do need to view it in its entirety. Then you will access a link, which we will provide, that will ask you to attest that you have viewed the entire course, whether live or on your own time, then complete the quiz with a score of at least 70% correct, plus the course survey, and you will then receive your credit. I do want to mention that the CE certificate will not be immediate; it will come a little while afterward. If you don't see it come through your email immediately, that's because it takes us a little time to process and make sure everyone's information is accurate, so there will be a brief period when you will not have it. If you do have questions, you can reach out to us with those as well. On to the next slide. This is a list of our current sponsor approvals; I wanted to make sure everyone saw it here as well.
What I will say (and again, this is in the docs section, so you can download it and look) is that I won't read these all through. But if your license type is not listed on our current approvals, I do encourage you to check your state board. For example, as a licensed psychologist in Michigan, I can use some of the other boards' approvals for my continuing education. Because that is unique to your state, we encourage you to look at your own board and see if it works for you. On to the next slide. A little more about myself before I jump into the material. I am the head of clinical strategy here at SimplePractice and a licensed clinical psychologist. I love seeing all the different locations; I wish I could write back to all of you. I'm located in Michigan, and I was also licensed and practiced in Connecticut for a decade before moving back to Michigan, so I saw both of those states mixed in there. Most of my previous clinical and research work was at the intersection of addiction, trauma, and chronic health conditions, looking to build equitable systems that helped individuals who were less likely to receive treatment for many of those conditions. I've previously been faculty at the Yale University School of Medicine and Oakland University School of Medicine, and I helped develop an addiction studies program at Western Connecticut State University. Clinically, I was a director at a forensic drug diversion program, where I managed prison reentry and individuals referred from court, looking at how we can treat individuals holistically, and I run a small private practice. With that, we'll go to the next slide. I want to tell you a little about what you can expect today. First, we're going to start with an overview.
Again, some of you joining me today might be using AI all the time in your personal life, maybe even in your practice at this point. But we know not everyone is, and in fact, some of the data I'll share shows that many people don't feel they have the knowledge they need yet. So we're going to talk a little about how many people are using AI, how they're using it, and how we can think about the structure of it. We'll then talk about the range of AI applications in practice, and as we go, I will talk about how we think about the decisions we make and what some of the unique pieces of evidence are across these different areas of use. Then we'll talk about how you can make decisions: is AI right for you and your practice right now? If you've already decided and the answer is yes, how do you think about which tools to use, how do you evaluate them, and how do you take steps forward? Then we'll cover some brief case examples and steps to implementing, and we will end, as I mentioned, with question and answer. The learning objectives are a bit of a repeat of what I just said, so I'll go through them quickly: think about the ways you can use AI in clinical practice, where we have evidence, where we don't, and where some tools may not need the same evidence as others. We'll talk about those major categories of use, think about ethical considerations, and make plans for the ways you might continue to evaluate and decide whether AI is right for you in your practice. So let's start with a little overview. Again, I know many people here may have some background, but to make sure we're all starting at the same point, I first want to talk about what we mean when we say AI. One of the things I often hear happening... and we can go to the next slide.
What I often hear happening in the field, and in the public in general, myself included, is that when we think about AI, there's a big jump from what AI is all the way to things like chatbots. There is a huge open space before we get to the more advanced interactive AI tools. So I want to make sure we start in that full frame: AI is any of these technology tools that give us the ability to bring together things that have typically been thought of as human intelligence: drawing patterns, making connections, making inferences, all of those things. And that's a big open area, as we'll talk about in a moment, when we think about the ways it has touched all of our lives at this point, even if we are not actively choosing or seeking out certain tools. When we go all the way forward to something like LLMs, large language models, that's where we're talking more about the ability to interact, connect, and make those judgments in real time: use human language, interpret it, and continue to connect along those lines. Again, I won't go into much technical detail here, but I want us to start today thinking about the whole range, because there are a lot of different ways we can think about AI use in practice and in our personal lives. So let's think first about what AI usage looks like nationally right now. One of the important things is realizing, as I already mentioned, that there are lots of ways we're using it without actively making the choice. If we're going to the weather station and getting updates, AI is being used in forecasting. It's being used when you reach out to customer support.
For example, I had a wild episode with a grocery delivery this weekend where things had spilled all over, and that very first connection is with AI, to help assist and get help faster. That was not an active choice of mine to seek help that way, but it was the first connection I made. There are lots of ways. Online shopping recommendations: I'm sure we've all seen how good some of them can be, where you don't even realize you're looking for something and there it is; they show you something that matches you perfectly, because AI has been interpreting the things you're interested in to give those recommendations. So those things are impacting us all in lots of different ways throughout our days. Recent research by the Pew Research Center suggests that 55% of adults acknowledge an active use of AI in their lives. I think that word "acknowledge" is really important here because, again, AI is in so many things we're doing. And whether we are aware or unaware of the role of AI in our lives, it is increasing. So are concerns, as the Pew Research study also showed: very fair and valid concerns about the ethics, limits, and considerations when you're thinking about using AI. So next, my thought was: okay, we know it's being used in lots of places. What about the active use of AI in health care broadly? We'll talk about mental health in a moment, but thinking about health care and our systems, how do physicians feel right now about the integration of AI? How do potential patients, really any of us when we seek health care, feel about the use of AI in health care?
When we think about it broadly, an American Medical Association survey showed rapid growth in the use of AI from 2023 to 2024: an increase of 80%, up to over half of physicians using AI in their practice in some way. These ways might vary widely. They might be responses to emails generated by AI, or something as advanced as interpreting radiology results, where we've seen really solid work. So there are lots of ways, and the range is quite wide, but nonetheless there's an active choice to use it in some way for documentation and different pieces of practice. This survey did not get into whether they were using it for diagnosis or intervention versus more routine tasks, but I would venture a guess that even within medicine, the most common use is still reducing some of the administrative burdens people experience. We can also think about patient perspectives. That same Pew Research study I talked about a moment ago showed that 40% of adults are okay with some level of AI use in health care. If we think about the spectrum I started with, that doesn't mean 40% of adults are saying they want a physician whose first point of contact is AI. What they're saying is they're okay if it's helping with documentation. They're okay if it's helping with making diagnoses. They're okay if it's searching electronic health records and identifying things like social determinants of health that might get missed in clinical practice. Another 40% also believe it could actually improve health care in those ways. So I'm reporting the percentage who say they're hopeful about it; that still means 60% of people, a large number, feel a lot of caution rather than hope.
Also, over half of the sample thought that AI could help reduce racial and ethnic bias. That's a really important question as we consider continued growth within mental health care and physical health care, because we know that AI has the potential to increase bias in some ways, but it also has the potential to decrease it in others, which we will talk about a little later. And this number changes so quickly that I'm sure it's already much larger than what you're seeing here, but even in mid-2024 there were already 800 AI-driven medical devices that had actually been approved, and many, many more not yet approved but in development. So now let's talk a little about AI usage by mental health professionals. Sharing some results from the American Psychological Association's survey of psychologists last fall: just over 20% of mental health professionals reported using AI in their practice during the past year. This was any usage; it could be helping them do things that didn't directly touch patient care, just using AI at all. That number is quite a bit lower than health care broadly, and for good reason, because there are a lot of concerns when we're thinking about privacy, and mental health care is not as simple as "what's the best next step for this compound fracture." Usually what we're dealing with is quite complex. So to effectively build those tools, there are a lot of concerns that mental health professionals, myself included, have about ensuring things are built the right way. A few interesting notes from that survey worth bringing up: early career psychologists, those in the first ten years of their practice, were most likely to report some amount of AI use.
Only about half reported having some knowledge of AI, meaning the other half felt they didn't even have the knowledge they needed to make informed decisions, and many people were unsure of the benefits of AI to their practice. In part, I think of this as a kind of trickle-down: if we don't have the knowledge, we don't know what it could do or what it shouldn't do, both sides of that picture. Some of the most common concerns were potential social harms, biased inputs, and lack of testing, all things we will continue to talk about as we think about how we evaluate these tools. The other perspective, before we wrap up what usage looks like right now, is thinking about how people are seeking help via AI in the community. We can go to the next slide for this one: who's using it and how. What I will say is that the research varies widely when we compare industry reports, individuals actively seeking mental health care, and the broad community. I think one of the biggest reasons we see numbers varying this widely is how we're defining what "using it for mental health" means. One of the things I'll bring up later is how we even define when it's mental health support versus just seeking support like we would from a friend, or expression, or just having that space. When we look at these different ways of measuring it, one survey found that 22% of adults have used AI for mental health in some way, and half were interested in doing so. But another study showed that 79% of US adults reported being uncomfortable with the thought of something like a chatbot for mental health.
And another industry study (I want to note that this is coming from a particular perspective) asked individuals who were actively seeking mental health care how many of them had used AI to seek support, and found that nearly half had used it in some way. So again, it's a really wide range, but what we can conclude is that it's growing, it's growing rapidly, and there are a lot of questions about when it crosses that line, if we think about a continuum from support all the way to a tool we would actually consider a mental health intervention. Now, let's go to the next slide and talk about why AI has been growing so rapidly in the mental health space. I think there are a few drivers, both for clinicians ourselves and in the community, for people being able to seek care. One big driver is that demands are really high. Everyone listening today can attest, I'm sure, that documentation demands are among the hardest things to keep up with. And I don't just mean notes; I mean everything we need to record, from the time a patient seeks care all the way through the time we discharge them. There's a whole lot we need to do and manage, and those demands have not been easy to reduce over time. We all find ways. Dating myself a little: I can remember, in the community mental health center where I worked, sitting around a circle of paper files just crying one day, saying, I can't sign all these notes, I can't write all these notes. They were still handwritten, and it was a huge chore. Even as we see improvements, those documentation demands make it interesting to see how AI can assist as we bring in clients and do all of those steps. There has also been rapid growth of technology in the field.
Think about how rare telehealth use was before 2020, before COVID. I remember working on a study where we were doing telephone support for youth with diabetes, and at that time it seemed an innovative approach, because there was so little option for individuals to get care in their own homes in a way that fit their demands and made it feasible for them to access care. But we're getting more and more comfortable with those things as well. I'll also mention one of the clinician drivers. This is new for me, actually looking at the number of hours in a week, but I have told probably every client I've ever worked with that meeting one hour out of the 168 hours in a week means there's a whole lot I don't get to see, and a whole lot that rests on their choice and their volition in what outcomes they want to see in treatment. One of the areas of growth I see for AI is what it could do to help clients uphold the commitments they make in session throughout the week. That's one area where I see potential growth. When we think about community drivers, one really important thing, beyond the range from support through actual intervention that I already talked about, is the number of individuals who don't receive care. The national survey on drug use and health last year found that 55% of people who met criteria for a mental health diagnosis received no care. It wasn't all because they were actively avoiding it. Some didn't feel they had the time. Some didn't have the resources, stayed away because of stigma, or didn't know how to pay. Some didn't know that they needed care; they just felt distress.
And I do wonder, as we continue to think about how we better meet needs, when we consider the continuum of severity and the continuum of who's willing to walk in our doors, where there are potential roles for AI as well. We'll go to the next slide. That gives you a little sense of where we are: some level setting on where AI is right now, why, and how much it's getting used. Now I want to talk about a range of AI applications. In this section I'm going to talk at a high level. I can see the chat going, and I wish I could read it. What I'll say is that if you have particular tools in mind as we talk through these different layers of how you might use AI, I of course can't give you specifics like "here's a company doing this" or "here's one I've really loved." But it really is a great chance for you all to connect with one another and share the ways you've done it, so please feel free to talk with each other. Let's go to the next slide. Every time I think about AI and its growth in mental health care, I think about how some of these things we've known for years. It's been twenty years since I started grad school, and I can remember research studies, for example, that talked a whole lot about how algorithmic predictions, using data to predict diagnosis and outcomes, were a really core part. They never stood alone; they were taken together with clinical expertise, because we have biases, we have faults, we can miss things as clinicians. The growth of these ideas is not unique to AI: how do we use data to make predictions? We use it as a support to our own practice and our own expertise, not as a standalone. But that's not unique.
That's not new information. It's also not new to automate tasks. I'll use the simplest example: twenty years ago, I absolutely used vacation responders on my email when I was out. That automation, at the most basic level, is not brand new with AI; we've probably all used different things like it. The same goes for computerized interventions in digital health: for a few decades we've had digital interventions that deliver things like cognitive behavioral therapy, and that could even be somewhat interactive, that were not AI-based. So these are not new ideas in our field. What is new is the speed at which this can happen. What is new is the capability to look across vastly more patterns, to pull predictions and ideas we haven't been able to pull in the past, and to process large amounts of information. Again, to date myself: I can remember research in grad school when I wanted a transcription. I did trauma research, and we would have trauma disclosures in different settings. We would literally sit with one of those foot pedals and transcribe as we went, because there was no tool to automate it. Now this can happen in an instant. It can happen live, for anyone who might even choose to be using it right now; we can see transcription as it comes through. That speed is a huge advantage we have with AI. So is the capability for live, highly responsive interaction. All of these are big areas of growth, but part of the way I like to think about the world is always asking: where is this based, where is it connected, and how does it connect to our theories, approaches, or things we've done in the past? With that, we'll go to the next slide. We're going to talk across a few big areas.
In part, I broke down these big areas of how we might use AI in practice because I think each brings unique considerations, unique questions we need to consider when we're thinking about what we need to know about the tool, how we might use it, what tools could even work, and what evidence we need to use them. These categories are really based on that. You can think of it this way: as you go down the list, there's increasing complexity in what you need to know and what needs to be proven about the tool, in terms of safety, efficacy, and effectiveness, before you might consider implementing it. That is not to say that each level doesn't bring its own concerns. We need to think through exactly how we're using it and make sure we're following the same standards all the way through, but it's a lot easier to uphold those standards of safety, efficacy, and effectiveness the higher you are on this list. The categories are: business management, access to care, clinical management, clinical monitoring and guidance, and client-facing tools. We'll go to the next slide. Let's first talk about business management. As I said when we talked about how AI has been used in health care, when we look at that APA survey of how clinicians were actually using AI in their practice, administrative support is one of the biggest uses. Note that I call this very first level business management: this is not getting into the level of helping us with notes. This is anything that is pre-client information. There's a whole host of things we can do that don't even necessarily require a special tool; any of the open-access AI tools we have could be used for some of these business management ideas.
I always encourage you not to integrate AI into practice just because it's new or exciting or there's some evidence for it. Start with what you're actually trying to solve for yourself and your own practice. So if, in evaluating your own needs, you realize you'd like help with things like organizing tasks or content creation, help writing your marketing materials, or help evaluating things you've written, getting proofreading and feedback, those are business management uses of AI that don't require the higher levels of protection like HIPAA compliance, as long as you stay away from identifiable information. For example, in preparation I wanted to give it a try to see how it would work. I went through and said: in this first quarter of the year, I made X amount of money in my practice. I intend to continue to see X number of clients throughout the remainder of the year at X value. The AI was able to give me a charting of my likely costs. The more information you give it, the more it can do, but it was able to chart out what I could likely expect for gross income. These are ways you can use it without giving anything that personal; you can stay at the highest-level numbers. Of course, none of this was a wild level of interpretation that I couldn't have done myself, but it was a really nice, quick way to get some structure. I've also tried, in advance of presenting this, simply asking, even in OpenAI's tool: help me organize my day. I'm seeing this many clients, I have this much documentation to do, and I want to do this much marketing. What are some recommendations? It can actually give you a time chart. So there are lots of ways we can think about AI.
And again, this is a chance, even in the chat, to share with each other ways you've been doing it. The important consideration, again, is keeping identifiable information out. Everything I just described trying included none of my client information; it might include some of my own information. One other way I actively used AI was creating marketing materials. I don't know if any of the rest of you are like me, but I really detest writing about myself. Detest it. What do I do? What's my expertise? I can list it out, I can bullet it, but putting it into a nice, pretty package is, I don't know why, incredibly painful. If any of you are like me, that's another business management use; nothing there touches my identifiable client information. What I put in was literally a prompt: here are the client concerns I focus on, here's my approach to treatment, and here's a little about my background. It gave me something back. Was it something I would just use as-is? Absolutely not. But it took the pain out, because I could then say: don't like that, don't like that, like this piece, move it around. It took a lot of the mental weight out of getting the words down, pen to paper. So those are all ways you can think about doing it. Another consideration, as I mentioned, is understanding how those patterns are drawn. In the example of asking it to forecast costs, it's really important to look deeply not just at the total number it gave me, but at how it breaks down those sections, so that you understand it. The hardest thing with AI is that we can't always peel back the layers to ask: how was that decision made? How was that derived?
But as much as you can, try to ask: how did this number get developed? Be able to look backwards. Another thing, even in these business management uses, is not to take AI's output at face value. As I mentioned, even in developing my own profiles, I needed to go through and make sure it actually matched my language and fit how I actually describe myself. That was really important as I went through. At the next level, on the next slide, we're thinking about access: some really demanding tasks like managing incoming requests for service, translation of materials if that's something you need, and initial screening and summarization. Once we're in this area, we're dealing with personal information. This is where we cross the line into needing much more security and having much more heightened concern about what information we're giving the tool and what trust we have in it to manage those things. Without knowing that a system is HIPAA compliant, we cannot put any patient information into an AI tool. That's a really important point. Sometimes we think the bar is just taking names out, but taking names out isn't enough. When we really think about what it means to make something de-identified, that's the level at which you can think about potentially using AI. At all of these levels, you're adding extra concerns. When thinking about tools like CRM management using AI: what oversight do you need to have? How does the tool work, and how do you balance what it's adding to your practice against the oversight you still need to give it and where the corrections and errors are?
And when will you take over the task, and what is it actually saving you? That's the next important part. Because I think one of the most important things, and this is definitely my opinion and not a given across the field, is that we need to be honest once there's an interaction. Once that email is not actually you writing that email, we need some level of transparency with the person receiving it, even if they're not yet our patient, right, even if they're not yet our client. So be able to think about where that line is and, in your approach, how much that interaction and that piece matter to you. I will say I know many people who have used these types of tools to manage the intake process, to be able to bring people in early. Some people find it a huge value add. I will say it also comes with that extra layer of making sure you're using tools that are safe for this type of use. And on to the next slide. As you can see, each of these gets a little bit more intense when we're thinking about what we need to know about the tool. This one's about clinical management. So this is, again, distinguishing business management from clinical management. This is where we're talking more about the actual clinical burdens, like writing notes, writing treatment plans, clinical materials and resources, although some of those could fit more on the business side, and reminders about clients. Those are all needs that we all have. Right? Managing it all is one of the biggest struggles. I wonder if some of you may relate to this. When I first thought about the idea of AI being involved in notes, one of my concerns, just from my clinical perspective, was the value I sometimes find in sitting and reflecting on my patient after a session and being able to put it in my own words.
What strikes me as I've seen these tools grow is that they actually do give you that chance to sit back and reflect, but that's a core part: you have to force yourself to still be an active participant in reviewing, ensuring that the output reflects your experience, that it reflects those things. For any of my fellow psychologists, and for others who may have encountered this in other ways, I think about the way we used to think about testing and the MMPI. My grad school made us learn how to score it by hand, not because we would ever score it by hand again; in fact, that introduces more errors. But because you want to understand what is working behind the scenes, to be able to have that expertise. So I think it's important, from a clinical perspective, to build the expertise and then remain very tightly in the loop as you think about these tools. So these are documentation-type tools: any note-writing assist, treatment plan writing assist, session reminders, summarization. An important consideration here is that this is where the tool is actually touching client information fully. And transparency about the use of AI matters when it is your active client, your active patient, and you are using it to record and, in some way, interact with them. This is where the really major concerns come in: ensuring we have some transparency, ensuring that the tools are HIPAA and HITRUST certified, that you have an agreement in place with the tool you're using, and that there's security in the transfer of that information. Sometimes where we forget is, even if we're using a tool in one place and transferring the output to our notes in another place, do we have something that ensures the security of that information even in the transfer? As we go to the next slide, we're thinking about things like clinical guidance.
So here's another interesting way that AI is being used. Sometimes it involves patient information, if it's about an active recording, and sometimes it doesn't. It's about decision support for next steps. So this could be things like: you're stuck with a certain client. I can think of many clients I've been stuck with in the past, right, where you're like, this approach to treatment just isn't working anymore. And it can be even searching to say, what other alternatives exist? What type of tools might I use? Being able to track patient progress. One of the really exciting things, and I've previously done some research in a more detailed, hands-on way with biometric data: how much sleep, and having information about sleep, can guide our understanding of depression and tracking depression with our clients. But I will tell you, having done research on it, it was painful to interpret that data. AI makes it possible, and there are tools out there that help interpret some of that really basic Garmin or Fitbit data and make it usable to us as clinicians in tracking. And as a health psychologist, right, that was something really important to me: tracking those types of movement when I've treated many individuals with chronic pain, or sleep broadly, given what we know about its interaction with mental health. There are also really cool tools that have been developed if you're taking a certain model approach and you want feedback and supervision. Right? One of the hardest things in our field is that once you finish your own training, it's really hard, and sometimes an active, hard choice, to get consultation. And I think this is another way we're seeing AI tools continue to develop: either by watching those sessions, or by reading recaps of where you think something should go and then suggesting what other alternatives exist. So part of this treatment guidance is that it's all behind the scenes. Right?
This is information given to you, not directly given to the client. And I think one of the most important considerations to me, the minute that we touch next steps, that we touch diagnosis, is that clinicians have ownership of those decisions. To use a parallel in our field: I find it really stressful when we think about something like a PHQ-9 or a PCL-5 measure being treated as a diagnosis. I'm always one of those reviewers on articles saying, that's not a diagnosis; it's indicative of a potential diagnosis. I would say the same with any of these AI tools: at the end of the day, it's giving guidance. But as clinicians, we own those decisions. Right? Those are decisions for your patient and for your client, and the tool can be additive, it can give suggestions. But, ultimately, we are the ones with the expertise in the room to make that final choice. And this is where we start thinking about: does the tool have methods to reduce bias? What is the evidence for its effectiveness? When we're starting to think about what data we need to support the use of a tool, once we're at something like clinical guidance, and absolutely when we're at the next step, it comes down to this: just like we expect there to be data on why a certain treatment approach like DBT works in the field, we want data to support it. In the same way, when we're thinking about these AI tools, we want to know: what data does the product have? Is there research in the field supporting this? Because it is crossing the threshold into, what's the clinical science behind it? And I think that requirement becomes a lot more intense even at this step, before the next and final step. So on to the next slide, which is again where I started in saying, let's think about AI broadly.
I think this is the least used right now by clinical practices, but it is something that's continuing to grow, and it's important for us to be aware of that growth even if we aren't actively using or thinking about using these tools: client-facing clinical tools. It's things like some of these chatbots that are actually integrated into clinical practice. Right? So, again, I, as a psychologist myself, look at where the clinician is still central. And when I think about where there could be a future in some of these tools: I've treated many individuals, opioid dependence was my primary specialty, and when I think about the number who also came in with very disturbed sleep and additional issues, sometimes there could have been some value. In fact, I sometimes referred clients to someone else when there were enough intense issues I was working on with opioid use and trauma that I'd have them see someone else for a very specific concern, a very specific, maybe manualized, approach to getting back on track with sleep. So I think there is potential, but there's also a lot of caution here. Right? This is the one where it crosses into: we really need to know that these tools work. And many of these tools have not crossed the threshold where we, based on therapeutic evidence, would say clinicians could consider them supported. You've probably seen some of the recent research, and there's been some really exciting research on things like Therabot. But that entire study is not a like-for-like comparison to clinical practice. Right? So even when they were talking about things like alliance, I think there's a lot in how we define alliance. Right?
The idea of whether someone feels supported by a tool, again, is very different from how most of us as clinicians define therapeutic alliance. Therapeutic alliance means a lot to me. It means agreeing on goals, tasks, and bonds, that old but still tried-and-true way of thinking about alliance. It means being able to work through tough moments. It means those moments where I've asked clients to tell me things like: tell me what I've done that makes you uncomfortable, tell me something you don't like about me, because I do a lot of emotion-focused treatments. Those interpersonal connections are not necessarily going to transfer. However, there are some very specific manualized things that may. And as I mentioned, I always try to think back to where we've been as a field. I do think there have been some prior examples; CBT for addiction, for instance, has been available as an online app for a long time. We've seen a lot of online apps. They've just been less responsive. But it does bring up real concerns about safety and how these tools are actually being used. It also raises, again, how integrated the clinician is in the feedback. Do you get to hear everything that happens between that tool and your client, and get a report so that you can intervene, so you still sit as the expert in it, or are you external to it? I think that is a really important and critical question once we're thinking about actually integrating it into practice. I also think it's important to know how much choice you have over the directions a tool might take. Is it for a very specific condition? Is it broader? Does it match your therapeutic approach? Does it not? What was the clinical input into the development? Again, this is the least likely use, but I would be remiss not to at least mention it when we think about the full spectrum of ways AI touches practice. This is one.
We'll go on to the next slide now. Just a few extra considerations. I think I've mentioned a few of these, so I'm going to go through this quickly. With these client-facing clinical tools, there's a really big difference between all of the direct-to-consumer applications, or even non-applications, where people are just accessing AI to seek support, versus clinician-managed applications where the clinician remains in the loop and remains the expert. It's promising and emerging. So in the words I often use in continuing education: it's emerging evidence, but it's still limited evidence, especially when we think about rolling out from something having efficacy in small trials to its effectiveness in our communities. And those safety concerns: for how much we hear about these tools, I don't think we're yet at the point where we have enough evidence that they actually lead to those same outcomes. We also know, and I think this is one of the important things we have to grapple with as a field that I'm often thinking through: where is that line between therapy and support? Because I think it's up to us to define it. Where is the line where something actually becomes treatment, becomes a medical intervention, becomes a psychotherapeutic intervention, versus support? Where does AI, in this client-facing context, actually fit in the continuum of care? As I mentioned, 55% of people didn't get care in the past year, and some of those people may never walk in our door for a multitude of reasons. Are we thinking about it in that space versus the alternatives? And what protections and regulations do we need implemented, and what's our role as clinicians, as clinical experts, in shaping what those rules and regulations should be, and where do we get to have those voices?
Because I think that is really critical as well: that there is clinical involvement from the start and that the voice of clinicians is heard in building those protections. Those are the groupings, and, hopefully, you've had some chance to share with one another the tools that you're using. But now I want to talk very broadly about how you might think about decisional support for whether you want to use AI in your practice and how you want to use it. So I'm going to share a couple of different models. This first one is really just about some of the major considerations, and I'm not saying this is exhaustive by any means, but these are four of the really big things we need to consider when thinking about integrating AI into our practice. One is patient views on AI. So being able to really sit with our individual clients and patients, discuss this, and see where they land. This will differ. I shared with you that I used to work in forensic settings. My bar for forensic settings versus more healthcare settings, where I was in primary care treating chronic pain, was different, because my concerns about privacy and what someone would be willing to talk with me about were different, and I needed to have different conversations if I were going to consider the use of AI. And it's not just individual patients; it's broadly how you think about the individuals you typically treat. Two is clinician views. It is an optional choice to bring these things into your practice. Right? So you really need to start with where you land. Like I mentioned before, I think there's a big difference between the thoughtful "here is what I need to fix in my practice, here's what would really add value to me and to my clients" versus "there's this new tool that can do things." It's being able to take that step back and say, where do I land?
Where do I land, as I'll talk about in a minute, on whether this needs to be my whole practice or just parts of my practice? Three is thinking about the clinical impact and evidence. As I started mentioning while we went through those categories, how much evidence you need is really going to differ based on the kind of tool we're talking about, but you still need to consider it even when you're thinking about something like a business tool. Right? And four is regulations and ethics. I'll say our ethics boards have had a lot more influence on how we think about this than regulations have at this point. And cutting across all of these at the end is the ease of integrating it into your practice and a feedback loop. By the feedback loop, I mean you staying in that cycle continuously and always. On to the next slide, which is just another view, from the Department of Health and Human Services, of the types of healthcare tools that could be used and the things we should be evaluating and considering. The major categories they mention are whether a tool is fair, appropriate, valid, effective, and safe. By fair, meaning we've done everything we can to try to reduce bias. It's at least been addressed, whether that's ensuring a diverse population was involved in the development or, even when we're talking about clinician tools, that diverse perspectives on clinical care are included. Appropriate: that it matches the needs of the clients you're serving and is appropriate for the service you're using it for. Valid: that it actually does something meaningful. When we think about something like a decisional tool on diagnosis, that tool is not meaningful unless it has some validity out in the world, unless it matches the way we evaluate things in other ways. Right? So we need to be sure that it's really doing what we think it's doing. Right?
That it's coming up with answers. Effective: that it does what it says it does, and does it reliably. And safe: safe for you, safe for your patients, with protections put in place. So I think it's a helpful additional frame that I wanted to make sure to mention. On to the next slide. And the last frame I wanted to give for thinking through how you might consider different tools. You know, we talk about this sometimes in assessment, when we're thinking about clinician inferences about behaviors. And it struck me that the same model really works when we're thinking about how much oversight and how much evidence we need in order to use a certain tool. At level one, we're doing things that are behavioral, with no interpretations made. Client walked in the room. They were crying. They said they were sad and distressed. Those are behavioral observations. You take what you saw and what you heard, and you note them. We do this in our notes all the time; think about mental status. That is level one. And as long as the AI tool is really valid and effective at doing that, less oversight is needed. Review is still needed, right, that never goes away, but less oversight is needed. It's taking behavioral information without interpretation. The next level up is labeling and categorizing, but not going beyond the surface. This is when we say the person walked in sad, crying, reporting sleep problems, irritability, all of these things, and we label it as depression. So when we're thinking about AI tools, the need for oversight increases the minute we step into this level two. It's making an interpretation on data. It's doing something that we otherwise need training to be able to interpret and put together. And level three is when interpretations and predictions are made on inputs: this person will do better with this type of treatment, needs this type of intervention.
This needs to be said to this person in that moment. That's level three. That's the maximum. Right? So I think it's helpful, in breaking down a tool, to be able to say: is it taking the step of making a diagnosis or interpretation? Then I need to be more careful in knowing how the tool makes those decisions and what it's doing, to ensure they're accurate, and I need to be closely involved, versus it making very clear observations like, this was said in session. Right? Many note tools, for example, don't go to that next step of choosing a diagnosis. They are much more level one. Next slide. So I'm not going to go through all of these. The reason I bring up cognitive biases: I said there was something I wanted to make sure to come back to. You saw in the original data I presented, from a national survey, that some people were actually hopeful that AI would reduce bias in the healthcare system. The bias is real, whether we're talking about any number of factors of diversity. I'll even say it's really real when we talk about mental health. Right? From my own field of addiction, we know that providers spend less time, in a hospital for example, with individuals who are struggling with addiction. We have biases as humans, and those biases are sometimes very clear to people seeking care. We do a lot in mental healthcare to try to combat them, but we're all still subject to a lot of judgments and quick decisions. In fact, that's the whole reason, and any of you that remember probably heard this, I think it's in every intro to psychology book: why do we have so many heuristics and judgments that we make so quickly? The best example is when you've been driving for some time and you think, how did I get from A to B? Your mind went on autopilot. You were focused on a thought, on something outside, a song, whatever it might be.
Yet you still somehow were able to know when something dangerous happened, when things were going on. You can make quick snap judgments. Those quick snap judgments help us get through our day. They also lead to bias in our clinical practice, but they're natural, and we just have to be active in fighting them. So, things like the judgmental heuristic: we're more likely to pathologize things that are not like ourselves. The availability and vividness heuristic: this one hits us in clinical practice all the time. We've seen a really vivid case we remember. I can think of a vivid case of a young man who came in, and it was emerging psychosis. I will remember him forever for some of the words he said to us, like, am I real? I don't know that I'm really here. It's vivid. So if I hear someone saying something like that, I may more quickly jump to memories of him and what his treatment course ended up being, because it's a vivid memory, and I make those quick judgments. Things like confirmatory bias: once we know something about someone, again from my own experience training individuals in the past, when someone hears a client has used cocaine, for example, they're very quick to jump to, well, of course they meet criteria for cocaine use disorder. Which is not necessarily true; there are lots of people who use lots of substances and don't meet criteria. But you're looking for any bits and pieces of evidence even when it doesn't cross the line. Some of the things we know from the literature that have been cited as influencing this: ignoring statistical information. I can think of a time I worked at a practice where almost everyone was diagnosed with bipolar disorder. It wasn't a bipolar specialty clinic, but the diagnoses went against the base rates of the disorder among people seeking care for mental health. The base rate, how often bipolar disorder occurs versus depression, is much lower.
So we always need to second-guess ourselves when data doesn't match what we believe. Limited availability of feedback about the accuracy of our decisions: once we're out in the field, very few people are giving us feedback. And information is not always available to us. Right? We have poor informants, to use a term I've heard working in forensic settings; we don't always get great information. If we go to the next slide: there's a lot of potential for AI to reduce some of those human-based biases. It brings its own biases, but, again, trying to take a balanced view, some things AI can do to actually help us include looking for disconfirming evidence. One of the biggest things we recommend in clinical practice and training is to keep looking for the pieces that just don't match, and to be aware of sources of bias and overpathologizing, so that if a tool truly is just looking at data patterns, it may give us an opportunity to double-check the decisions we've made. Seeking feedback on the accuracy of decisions we've made is also a really valuable input. There may be things we didn't consider; when we think about the overlap of so many of our diagnoses and symptoms, we don't always think about something we haven't diagnosed as often. I think about a young woman who came in for an ADHD evaluation who actually met criteria for obsessive-compulsive personality disorder at the time. There were overlapping symptoms of not completing tasks and not turning things in, but it took me a while to get to the point where I noticed there could be something other than ADHD. So if we go to the next slide: just to note that AI also has its own biases, and this is a really important and core aspect of how we think about tools becoming effective.
Although AI can offer us all those data insights into things we're doing, it's a good second lens, just like a good consultation with a peer where they say, did you think about this, and you might not go with that alternative, but it makes you think about why you came to your decision. But AI itself is based on data that has bias in it, because our field has bias. Right? Because of language use, because we sometimes overpathologize disability in certain groups and certain individuals, and overpathologize certain symptoms and concerns. There's underrepresentation. One thing we know about the data behind the AI tools that have been built: research has shown pretty clearly that prediction of depression, something with a really high base rate, meaning it happens a whole lot, is much better by AI than something like schizophrenia diagnosis, which is much rarer and much harder to accurately predict. The data is not as clear; it's harder to make those predictions, and AI carries the same bias we might have, though as clinicians we may notice some of the signs that raise extra concern in a different way. Underrepresentation of diversity in datasets, in the ways we've looked at: I mean, we're not many years out from when women weren't even required to be included in clinical trials. Right? So when we think about the diversity of data in our field, the diversity is quite low. And when we're thinking about the use of AI, as people are building tools, are they including diverse individuals as they create them? And there's the problem that there is a black box around the processes that happen with AI. We can't always understand what's happening, and because we can't understand, we can't critically evaluate some of those pieces.
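The base-rate point above can be made concrete with a short Bayes' rule calculation. The prevalence, sensitivity, and specificity numbers below are illustrative assumptions, not figures from the talk or from any study; the sketch just shows why a predictor with the same accuracy produces far more false alarms for a rare condition than for a common one.

```python
def positive_predictive_value(prevalence, sensitivity, specificity):
    """P(condition | positive prediction), via Bayes' rule."""
    true_pos = prevalence * sensitivity
    false_pos = (1 - prevalence) * (1 - specificity)
    return true_pos / (true_pos + false_pos)

# Illustrative numbers only: same model accuracy, very different base rates.
common = positive_predictive_value(prevalence=0.20, sensitivity=0.85, specificity=0.85)
rare = positive_predictive_value(prevalence=0.01, sensitivity=0.85, specificity=0.85)

print(f"common condition PPV: {common:.2f}")  # about 0.59: most positive calls are right
print(f"rare condition PPV:   {rare:.2f}")    # about 0.05: most positive calls are false alarms
```

Under these assumed numbers, an identical model is right most of the time when flagging a high-base-rate condition like depression, but wrong roughly nineteen times out of twenty when flagging a rare one, which is the same base-rate neglect the talk warns clinicians about in themselves.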
What can we do to overcome this? Think again, as we come back to federal regulations, about where bias mitigation might be required or recommended; about diverse datasets and bias assessments in development. Are people actually going in and checking whether that bias is there? And transparency about whether those bias mitigation steps are even happening in the development process. As we go to the next slide: I'm putting this back up one more time because we're now going to jump into each of these specific areas. So we can go on to the next slide, which is first thinking about clinician views on AI. A few considerations. When we are thinking about AI for ourselves: AI is an additive tool. It's not essential. This is an active choice you get to make about whether some of these tools are integrated into your practice. And it's important to consider your beliefs and your knowledge so that you can make an accurate, informed decision. Taking the time to read about the tools, and to read about the concerns with the tools, is a really essential part of that process. Not everyone will want to use it, for valid reasons. Some people, though, may find it really effective for their practice, but might need to vary how they implement it in different settings and with unique patient experiences. And you have choices in how you set that up. I've heard some individuals make the choice to say, I'm actually an AI-first practice when it comes to my notes, for example, and for any new clients, part of my agreement is: this is what I do. It might even be one of those things where you don't take on clients who don't fit that. For other people, it's use it for some and don't use it for others. But you get to make those decisions for your practice and how you intend to implement.
Some of the important things to consider: going through your own evaluation of where AI fits in mental health and what your concerns are, but also thinking about whether you have any champions in your community you can look to. Whether it's within your practice, if you're in a group, or other people in the community, look at people who have taken those steps, what you can learn from them, and how they have evaluated their tools as well. Next slide. The next step is really thinking about your clients' and patients' perspectives on AI. I added a little bit here because I think one of the really important things is not just asking how they feel about you using AI in practice, but really understanding their own use of AI right now. As we continue to learn more, we know that people are using these tools in different ways. So be able to start a conversation when you're bringing people into your practice: are you using AI? Why are you using AI? And as we think about concerning use, there's been some interesting research that, again, aligns with my field in addiction, but I think it's a really interesting parallel: some of the warning signs for problematic use are people seeking support from AI because they don't have other social interactions. The other concerning sign researchers have found is when people are seeking AI as support rather than as something fun or additive to their experience. Those same kinds of things hold true in a lot of mental health diagnoses. Right? If you're not using it as an active choice because it sounds fun and additive, but instead you're using it to relieve distress, or because you're lacking social interaction, it can actually become more harmful. You can become more dependent on it, and it can actually lead to social problems.
And I think we'll continue to see a lot more research coming out in this area, but those are some interesting early findings. Whether they're able to put boundaries on it: just like we talk about social media use, we can use some of the same questions you might already use, if you see youth or young adults, about how they use social media. Can they put limits on it? Can they walk away? Could they take it off their phone for a couple of days? Can they test themselves in these ways? Are there any signs of concerning use? And how much are the lines blurring between AI and social situations? If we go to the next slide: a little bit more about actually talking with your clients about the use of AI in your own practice. There's no explicit federal requirement for consent to AI, but many of our ethics principles across fields do say that informed consent is important. That doesn't necessarily mean a formal signed form. What it means is transparency: that you have had that discussion, that if the client is interacting, or their information is interacting, with AI directly, you have those conversations. So it's really important to say what is going into AI and how you are using it, and to have the conversation about why you're using it. I think those are important questions clients often want answered, the same thing I've heard when I've talked with clients about implementing something like measurement into practice. They want to know: why do you want to know this? What is your role as a clinician in making the ultimate decision? And what is your use of it afterwards? Sometimes people's biggest fear is actually that you're going to watch that recording. They're less concerned that you're using AI to help you write a note than they are about the thought of, are you going to sit and listen to us again and critically evaluate us?
So it's important to understand where their actual concerns come from, and to allow space for why you're choosing to do this, how you're using it, and what protections you've put in place. Next slide. Starting with client perceptions, describe your own purpose for use. You can think of this as a general checklist. Describe the limits of use. Are there times you won't use it? I think about this from my past experience with recording sessions. I remember one time, in a clinical trial of a treatment where all sessions were recorded, a client told me something really terrible had happened to him and said, "I cannot talk about this on audio. I can only talk about this with you." Let clients know upfront whether there will be exceptions, where you might say: actually, in this session we're not going to use it, but can we use it in other sessions? Be clear where your line is. Again, it's your line to draw for your practice, but it needs to be talked about and transparent with your clients. Describe how you selected the tool and what process you went through, share the information you have, be explicit that you're in the loop, and consider where, based on your state laws, you might need explicit consent for recording if you're using it for documentation or other purposes like supervision, and what your ethical principles say about that transparency. Next slide. A little bit more about that third bubble, clinical impact and evidence. Evidence is growing in all of these areas. For client-facing clinical tools, as I mentioned, there's been a lot of exciting research, but not quite to the point where we'd say it supports public access for all of the tools. But it's growing, and it's growing fast.
What I wanted to help you think through here is what kind of evidence you should look for before you choose a tool and decide to use it in your practice. As you see from top to bottom, there is always a need to ensure things are accurate: whether you're using it for business management, like when I mentioned having it help write my profile, which I then edited, or when I've taken tools I wanted to use with a client and said, can you make this into a better worksheet? I want to assess this, this, and this. I am responsible for making sure the output is accurate, but I also want to know the tool itself has some degree of accuracy. That's true across all of these tools. Bias assessment, safety rules, and the evidence in the literature to support them become even more important once we get to the clinical management pieces and beyond. And finally, once we get to clinical monitoring and client-facing tools, we really need to think about the effectiveness of the tool. Has it been shown to actually lead to improvement? Has it been shown to be safe? You have the right to ask for these things when you're considering a tool. On to the next slide, and that final bubble: regulatory and ethical considerations. On federal considerations, as we've talked about a little, there are not many strict rules regulating AI right now. In that same Pew Research study, over 80% of US adults believe additional regulation of AI, in health care in particular, is needed. There are some rules that are nonspecific to AI but that do impact the use of AI, around information safety and security: things like HIPAA, having BAAs, HITECH compliance, and thinking about patients' access to their information.
So you need to ensure that whatever AI tool you use gives you timely information that could be shared if a patient requested it, when it's something formally part of their record. Also, claims about what AI can do cannot be misleading or unsubstantiated. If a vendor says a tool can cure people of OCD, they need evidence that it can cure people, which is unlikely, to put it carefully. So there can't be unsubstantiated claims. All of these things, though not specific to AI, are really important. When we think about AI-specific rules, there's very limited regulation right now, but some AI in health care needs FDA approval as a medical device. There is some accountability recommended, and proposed, for how algorithms are used. And if we look back at the 2023 executive order, we can think about some of the rules suggested there as well, about transparency, risk assessment, and beyond. We can go to the next slide, which covers some ethical considerations. I wanted to make sure this was relevant across different fields, so I'll highlight a few of the areas we should think about in our ethics when considering AI. Confidentiality and privacy is, of course, one of the number one things we uphold, but also one of the number one things we want to make sure AI is upholding when we use it. Informed consent and transparency: that patients are informed about their care, whether it involves AI or not. So this ethical consideration is big. Competence: as practitioners, we only use tools that we are competent to use. That goes back to gaining knowledge of the tools you want to use and how they work before you implement them in practice. Next slide. Beneficence and non-maleficence: that AI is an enhancement and does not lead to a replacement of, or harm to, the client, so that we're doing things that actually benefit them.
Integrity and honesty: we must not misrepresent AI's capabilities, and we have to be clear and honest about the limitations of the tools we use. And, among the major ones, justice and fair access: we need to ensure that we've done what we can to understand the biases in our tools. Next slide. I will go through a few of these quickly. These are more thought exercises for yourself, so I'm going to move through them fast to save time for our question and answer. First case example: Sue always finds herself sketching out a homework assignment by hand with patients at the end of sessions. She has recreated the same form many times, and she decides to give AI a try to help her build out worksheets. For each of these, what I ask you to consider is: what are some of the unique considerations for this use, and what should the therapist consider in choosing the appropriate tool? If we go to the next slide, and you'll also see this in the handouts, there are some of my own ideas. Is she using any PHI in building the worksheets? Will she carefully review for errors? These are all important things, as we've talked about in the decision process. On to the next case example: Jim can't keep up with the emails coming into his practice. He's considered hiring someone to manage emails, but can't afford it right now. He found out about a new AI tool that can auto-respond to incoming emails. The same questions apply. Again, I encourage you to think for yourself about what other questions you'd have, but I threw out some of the first ones top of mind for me. What types of emails is he receiving? Can he use the tool for some messages and not others? These are all important parts of the process.
And in the final case example, Jill notices that many of her clients are having difficulty practicing activities that were discussed in session while they are at home, during the 167 of the 168 hours each week they aren't with her. She's considering an AI tool that her clients can use on their phones that will give her weekly feedback on what they've been doing at home. Again, these examples rise in intensity and in what we expect. If we go to the next slide, you can see some of the questions I'd consider. What research is there on the safety and effectiveness of the tool? What feedback does it provide? How much is Jill in the loop? How much control does she have? And the next slide: the final little section, steps to implementing AI in practice. Just some broad ideas. One, decide what areas of practice you want to focus on, and look for champions who might help you. Two, ensure you have evaluated the appropriateness of the tool for your use against all of the guidelines we just talked through. If it's client facing, pilot it with yourself when feasible. Try it out yourself or with your colleagues before you use it with patients. Determine if you'll use it broadly or with select patients. Start those conversations with your patients when ready. Maintain close oversight and reevaluate the value of the tool regularly. It is important that we maintain ownership over what is coming out, and that we stay up to date on changing policies and guidance. With that, we'll switch to some Q&A. The first question, from Jennifer: I did not see that APA is listed; is CE credit not approved by the APA? We do not currently have sponsor approval with the APA. That is why we recommend checking with your state board on whether any of the other license types may work for this credit in particular. But thank you for that question. Can transcripts be subpoenaed?
This is a really great question when thinking about the tools you're using. Thinking about this broadly: you have a transcript of your session. In theory, if you're maintaining that transcript in your own records and keeping it indefinitely, there is a risk of subpoena for those types of materials. So when you're thinking about tools that create AI transcripts, part of it is how you're defining these records and what exactly you are required to disclose, because there are definite layers. I've had plenty of subpoenas where I could give a summary of care and did not have to give all of the additional detailed client notes. We also have the right to maintain separate records that are not part of the core client record, and you can separate those pieces as well. The other way people often think about it: if you de-identify a transcript and detach it from any record of that individual, it's no longer, and I'm going to make up a word here, subpoenaable. If it's truly de-identified, not just following the rules of taking out name and date of birth, but truly de-identified so it can no longer be attached to that client, then those records cannot be subpoenaed, because they're no longer part of a client record. But that's a great question. Next: what organization creates standards for AI and mental health? Are the standards available to read? This is a wonderful question. Thank you, Regina. When we're thinking about the layers of who's creating those standards, we can think through some of the grouping I just started to mention. One, we can think about the federal rules.
When we think about federal rules, there are no regulations right now specific to mental health on standards for how AI should be implemented. There are recommendations. Among the things I listed on that slide, there's guidance on how we should think about it, including the FAVES model I mentioned from the Department of Health and Human Services. So there's a lot of guidance on how we evaluate it, and you can search those names and find additional information to read. When I think about standards, though, and what I suspect is part of the question: there's the regulation level, and then there's what happens within our own organizations. As an APA member myself, and for others in the ACA and other groups, those organizations have started to develop their own approach or position on AI and how we should think about it in the standard of practice. So when I think about all the bubbles that affect us: federal regulations, our state regulations, our licensing regulations, and then the standards coming out of our professional organizations. We belong to those organizations and follow their standards and ethics, so everything built around them is a really important resource. I also think there continue to be more and more voices. For example, last year, and I did not include this as one of the citations, but you should be able to find it based on my description, there was an entire position statement paper developed by a group of individuals in health care and mental health care on reducing bias in our AI tools, and there's an entire standard of practice there as well. But what you'll see is that there are bits and pieces, as I talked about.
There are bits of regulation that are nonspecific to AI but that we have to follow, and then there are things that narrow more and more to our own practice: what state we're in, what our licensing body says, and what each of those pieces requires. As I mentioned in the presentation, there's a lot of room in this area to continue looking for ways to advocate and to build a uniform voice. If anything, what I often see happening is that we know we need to manage AI fast in our field, and that means we see a lot of recommendations popping up, a lot of takes, a lot of different views on how we can and should use it. But coming together in a unified voice is something we're still waiting and looking for: a clear, unified voice on AI in mental health care. The next question, from Lavina: the biggest barrier for me to using AI for notes is concern about clients having negative reactions to the request, the informed consent process. Is there a good sample script I can use to propose AI to clients when I am the one who stands to gain while they assume the risk and discomfort? I think that's a wonderful question, and this is not an uncommon experience in lots of things. In mental health care we have it a little less, maybe, than some health care settings, but there are lots of times when we do things because they're our procedure, because they benefit us, and the direct impact to the client isn't as clear, or is secondary. There may be benefits to our clients if we choose to use an AI note-taking tool: things like being able to stay more focused in the moment, or reducing burden in ways that free you up for other things. But it's not always that. Sometimes it really is just helping us get through those moments and that time.
I can actually share part of my list of what to talk through with a client; it comes from a checklist I made for myself at one point, and I will happily share it post-webinar with the documents so that you have access as well. But I do think it starts with the conversation. What are their fears? Sometimes the barrier for a client, like I said, isn't even the AI as much as the recording. I've hesitated over recordings before, even though I've had to do it a lot in my career so far. It can be really uncomfortable to talk about with clients, and usually I'm more surprised than not that they say: okay, whatever, if it's for me and my purposes. So talk through: what is the fear? Why are you using it? Even if the benefit is to you, I think clients want to know, and many of them don't want things to be harder for you if there are potential answers. Then talk through what it means, and some of this isn't in that checklist because it will be specific to the tool: here's how the tool works; I'll get the transcript, and then I won't maintain that transcript, for example; it will be stored, but not with your information; I won't review your video recording because that's not part of my practice. Be very explicit about the things you yourself had questions about, answer your clients on those same things, and give them space. One of the other important things, and I know I have struggled with this, so I say it as something I know I should do and don't always do myself, is to step back and really say: this is a tool I'm going to use, and here are the boundaries of where I see that it's beneficial.
But we need our current clients to come on board. That's a lot easier when someone is being introduced to your practice and you can say upfront that you use AI. Once you have an established client, be very clear on why you want to use this. And there may be valid concerns. Clients may be sharing things; I can think of many clients from my work in the forensic area who shared very specific events with me and wouldn't have wanted that recorded. So the answer is I don't have a specific external resource to send you to, but I will share the checklist from that slide where I walk through each step and what it looks like. So thank you so much for bringing that question up. All right. Well, thank you. Unfortunately, that was the last question we could get to today, but I really appreciate you all being here and taking the time to join. A couple of last reminders: you can download the slides and the references from the document section. Also, if you are seeking a CE certificate of attendance, please make sure you've been with us this morning; that's step one done. If you are watching this as a recording, please ensure that you watch it all the way through, then follow the link you can see across your screen right now. It will take you to a form where you can complete your survey and your quiz. If for any reason you don't pass the quiz the first time, you are able to go back through, and there is guidance within that form on how to do so to receive your one and a half hours of credit. Certificates won't be sent today; I just want to make sure that expectation is set. It will take a little bit of time, because we want to ensure they are fully accurate and that we have all the information we need.
If you do have questions, you can reach out to us, but I really appreciate you all being here. I hope you have a wonderful rest of your morning or afternoon, depending on where you're sitting in the US today. It was great to have you all, and thank you again.