Using AI Medical Scribes safely

It’s a brutal fact of clinical life: clinicians on the frontline are burned out by the administrative and documentation burden. Clinicians spend more time on admin than they do with patients, resulting in widespread dissatisfaction and high exit rates from the profession. Technology such as electronic medical records, and the shift of dictation from the dictaphone to the computer, has hindered more than it has helped. Despite valiant efforts from secretarial and admin support staff, there has been little relief in sight.

The system feels broken. Thankfully, help is finally on the way.

With the advent of generative AI has come a new generation of technology that could actually help those on the clinical frontline: AI medical scribes. Now a clinician can have an AI medical scribe ambiently listening during their consults to rapidly produce the required notes and documents. Physician groups, healthcare academics, consulting firms, and even public health authorities are promoting AI medical scribes as a potential solution to health worker burnout and a way to improve the efficiency of healthcare systems.

But, is it safe? 

We understand why clinicians are concerned. The clinician and patient interaction is the most sensitive and sacred piece of medicine. It can be hard to understand how AI products work and how to safeguard yourself and your patients. At Heidi, we believe in having transparent conversations about what AI medical scribes are, how they work, what they can do and how to use them safely. Every day we hear from clinicians who tell us that using Heidi has not just helped them but actually changed their lives for the better. We want to help you feel confident in using an AI scribe so you can combat burnout and reclaim your time. Let’s dive in.

What is an AI scribe?

Using ambient listening technology, an AI medical scribe “sits in” on patient interviews and transcribes the conversation in the visit or meeting it is hearing. The practitioner then prompts the scribe to automatically generate structured clinical documents such as progress notes, referral letters, or assessment reports using templates and preferences. This removes the need for clinicians or secretaries to take notes in real time during the interaction; instead, the documents are generated with a tool like Heidi at the end.

Clinicians must still review and edit AI-generated documents. However, because they are no longer writing notes from scratch, administrative burden—a key contributor to health worker burnout and systemic inefficiency—can be vastly reduced.
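To make that workflow concrete, here is a minimal, purely illustrative sketch of the pipeline an ambient scribe follows: audio is transcribed, the transcript and a template are turned into a draft note, and the clinician reviews the result. All function names, the template, and the placeholder outputs are invented for this example; this is not Heidi’s implementation.

    # Conceptual sketch of an ambient AI scribe pipeline.
    # All names, templates, and outputs below are illustrative placeholders, not Heidi's code.

    SOAP_TEMPLATE = "Subjective:\nObjective:\nAssessment:\nPlan:"

    def transcribe_audio(audio_path: str) -> str:
        # In a real system, a speech-to-text model converts the recorded consult to text.
        return "Patient reports three days of sore throat and fever ..."

    def draft_note(transcript: str, template: str) -> str:
        # In a real system, a large language model maps the transcript onto the clinician's template.
        return template.replace("Subjective:", "Subjective: " + transcript)

    def scribe_consult(audio_path: str) -> str:
        transcript = transcribe_audio(audio_path)      # 1. ambient listening -> transcript
        return draft_note(transcript, SOAP_TEMPLATE)   # 2. transcript + template -> draft note
                                                       # 3. the clinician reviews, edits, and approves

    print(scribe_consult("consult_recording.wav"))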

As an example of real-world results, solo practitioners using Heidi’s AI medical scribe are saving up to 2 hours of documentation time each day. In clinic settings, some of our customers have reduced charting time by 70% and recouped over $10,000 in lost clinical time in just 12 weeks.

Ok, but how should I use an AI medical scribe like Heidi?

Clinicians don’t want decision-making or medical assessment taken away from them, and that’s not what AI scribes like Heidi are intended to do. AI medical scribes are a partner to the clinician, not a replacement. You can find Heidi’s intended use here. Simply put, Heidi is here to help you. Think of Heidi, or any other medical AI scribe, as an assistant or resident that helps you write the plethora of clinical documentation demanded of you. Any documentation created will always require your review, editing, and submission.

Ultimately, much of the software and technology available today can cause harm if it’s misused (recording devices, AI image generators, and online communication tools are examples). There are even countless examples of medical devices and products that are therapeutic when used correctly, yet harmful in certain circumstances. As long as you continue to take responsibility for your documentation and ensure the content of the notes and documents accurately reflects the patient interaction, you can use AI medical scribes safely.

Wait! How good are these draft notes and documents? I’m worried I’ll be spending more time editing than if I had just written the note or document myself.

Frankly, this is an ongoing challenge for every AI medical scribe: producing a high-quality draft document that requires minimal edits from the clinician. Every clinician should test AI scribes in real-world scenarios to determine their efficacy and how much time they really save. This can be as simple as role-playing a few consults in the lunch room or over the kitchen table; anything that helps you gain confidence that an AI medical scribe will help you.

For instance, some AI scribes struggle in specific medical settings, such as inpatient care, or don’t produce documents of high enough quality for the clinician to rely on. In fact, some large hospital systems are seeing only 30% of their clinicians adopt the AI scribe they purchased because it is so ineffective in their setting.

Heidi has been designed to be used in every medical setting, specialty and scenario, resulting in only one in a thousand draft notes receiving a negative rating. 

The six most common concerns and how we think about them

Below are questions about safety and risk that clinicians and administrators commonly ask about Heidi.

  1. Patient safety and clinical accuracy

What patient safety issues should clinicians be aware of while using an AI medical scribe?

The main patient safety issue with AI medical scribes is documentation errors.

These errors may take the form of:

  • Overt errors in transcribing symptoms, medicines, conditions, and medical history
  • Omissions of important information (including non-verbal or implied details)
  • Unrelated information being added to documentation (as a result of “hallucinations” in large language models)

You can mitigate this risk by reviewing all AI-generated documentation for accuracy before it is shared or added to a patient’s health record.

How often do AI medical scribes make errors?

AI scribing is revolutionizing clinical documentation, but it's crucial to understand its role as a supportive tool rather than a replacement for clinician expertise. Like human scribes or residents, AI scribes produce draft notes that require clinician review, edit and approval. The goal is to enhance efficiency without compromising accuracy or quality.

At Heidi, our mission is to support clinicians in creating high-quality clinical notes quickly and easily. This approach is especially important given the current state of manual clinical documentation, where surveys suggest that approximately 50% of electronic health records (EHRs) contain errors, and 6.5% of patients identify mistakes in their records when given the opportunity to review them.

When evaluating AI scribing tools, it's important to look beyond simplistic metrics like "word error rate." While some vendors boast very low error rates, these figures may not accurately reflect real-world performance. A low word error rate doesn't necessarily equate to a high-quality clinical note. The true measure of an AI scribe's effectiveness is its ability to capture the essence of the consultation, including key medical information, next steps, and the clinician's overall assessment. For instance, only 1 in 1000 notes created from Heidi’s scribing receive a negative quality rating. 
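To illustrate why, here is a small, self-contained sketch of how word error rate is typically computed: word-level edit distance divided by the length of the reference transcript. The example sentences are invented; the point is that a single dropped word produces a flattering score while reversing the clinical meaning.

    # Minimal word error rate (WER) sketch: word-level edit distance / reference length.
    def word_error_rate(reference: str, hypothesis: str) -> float:
        ref, hyp = reference.split(), hypothesis.split()
        # Dynamic-programming edit distance over words (substitutions, insertions, deletions).
        d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
        for i in range(len(ref) + 1):
            d[i][0] = i
        for j in range(len(hyp) + 1):
            d[0][j] = j
        for i in range(1, len(ref) + 1):
            for j in range(1, len(hyp) + 1):
                cost = 0 if ref[i - 1] == hyp[j - 1] else 1
                d[i][j] = min(d[i - 1][j] + 1,         # deletion
                              d[i][j - 1] + 1,         # insertion
                              d[i - 1][j - 1] + cost)  # substitution
        return d[len(ref)][len(hyp)] / len(ref)

    reference = "the patient has no known drug allergies and takes no regular medication"
    hypothesis = "the patient has known drug allergies and takes no regular medication"
    # One missing word out of twelve gives a "good-looking" WER of about 0.08,
    # yet the allergy status has been reversed.
    print(round(word_error_rate(reference, hypothesis), 2))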

Our expectation at Heidi is that by reducing the burden of note-taking, high-quality AI medical scribes will significantly improve both the quality and accuracy of clinical documentation. However, we emphasize that clinician oversight remains essential. Every clinician should view AI medical scribing as a tool to produce better notes in less time, not as a standalone solution. 

How does Heidi ensure patient safety and clinical accuracy in AI-generated documentation?

Heidi's quality assurance and risk mitigation strategy involves multiple layers of oversight and continuous improvement. 

The Medical Knowledge team regularly reviews generated clinical documentation for accuracy and quality. Our AI models and prompts undergo regular testing, updates, and maintenance to ensure high-quality output. We have not used, and do not use, patient-identifiable data from our sessions to train Heidi’s models.

We also have processes to facilitate regular user communication and feedback to ensure Heidi produces content that meets the accuracy and quality standards required by our users. Any concerns identified through this process are actively addressed. Clinicians using Heidi receive support, education, and advice on best practices based on the Clinician Terms of Use. 

The Compliance, Development, Security, Operations, and Medical Knowledge teams collaboratively oversee AI risk assessments, creating a comprehensive approach to maintaining the integrity and safety of the AI-assisted documentation process.

  2. Privacy and data security

How does Heidi protect privacy and data security, especially sensitive patient data?

Heidi employs a comprehensive, multi-layered security strategy to prevent data breaches, malicious attacks, and unauthorized access.

Key features of our privacy and data security strategy include:

  • Non-retention policies: Where any data is shared with third-party vendors (Kinde, Stripe, etc.), we enter into data processing agreements that ensure no data of any kind is stored.
  • Pseudonymization: Where data shared with third-party vendors may contain protected health information (PHI), we employ pseudonymization techniques that replace names, ages, dates of birth, and addresses with anonymous “Jane Doe” equivalents (a simplified sketch follows this list).
  • Separation of patient identifiers at rest: At rest and in storage, patient identifiers in transcripts are stored separately from the de-identified transcripts in siloed databases. In the unlikely event of a breach, attackers would only be able to find de-identified transcripts.
  • Encryption: All patient data is encrypted during transmission and while stored. We use advanced encryption standards to ensure the confidentiality and integrity of all patient data.
  • Access Controls: Access to sensitive data is strictly controlled. We use robust authentication mechanisms to ensure that only authorized personnel can access patient information, and only with your explicit consent for the purpose of troubleshooting.
  • Regular Audits and Compliance: Our systems are regularly audited for vulnerabilities and compliance with healthcare regulations. This ensures ongoing protection and adherence to industry standards.
  • Secondary Usage: We want to make it absolutely clear that we will never use your (or your patients’) data for any purpose outside of the specific terms listed in our privacy policy. We do not and will never engage in any unauthorized secondary use of data.
  • Continuous Monitoring: We employ state-of-the-art monitoring tools to detect and respond to potential security threats in real time.
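To illustrate the pseudonymization idea mentioned above in the simplest possible terms, here is a toy sketch. The field names and placeholder values are invented for this example; they do not describe Heidi’s actual schema or process.

    # Toy illustration of pseudonymization before data is shared with a third party.
    # Field names and placeholder values are invented for this example only.
    def pseudonymize(record: dict) -> dict:
        placeholders = {
            "name": "Jane Doe",
            "date_of_birth": "1900-01-01",
            "address": "1 Example Street",
            "age": "unknown",
        }
        # Replace direct identifiers with neutral placeholders; keep non-identifying fields.
        return {key: placeholders.get(key, value) for key, value in record.items()}

    original = {
        "name": "Alex Smith",
        "date_of_birth": "1987-03-12",
        "address": "42 Real Road",
        "age": 38,
        "presenting_complaint": "three days of productive cough",
    }
    # The identifiers come out as placeholders; the clinical content is untouched.
    print(pseudonymize(original))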

Additional information on this topic can be found on our Safety page.

What does data encryption actually mean?

Within Heidi, data encryption occurs during transmission and at rest.

Encryption during transmission means your data is turned into a secret code while it’s being sent over the internet, so it cannot be read if intercepted.

Encryption at rest means your data is stored in a coded form on a device or server, keeping it safe even when it’s not being used.
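For readers who want to see what encryption at rest looks like in practice, here is a minimal sketch using the widely used Python cryptography package. It is illustrative only and says nothing about the specific algorithms, key management, or infrastructure Heidi uses; encryption in transit is usually handled at the connection level by TLS rather than in application code.

    # Minimal sketch of encryption at rest with the "cryptography" package (pip install cryptography).
    # Illustrative only; not a description of Heidi's implementation.
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()                  # in production, keys live in a managed key store
    cipher = Fernet(key)

    transcript = b"Patient reports three days of sore throat and fever."
    stored_blob = cipher.encrypt(transcript)     # what sits on disk: unreadable ciphertext

    print(stored_blob[:20], b"...")              # gibberish without the key
    print(cipher.decrypt(stored_blob).decode())  # readable again only with the key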

Does Heidi comply with general and healthcare-specific privacy and data regulations?

We’re fully compliant with the most stringent privacy and data protection regulations across all the regions we serve, including the General Data Protection Regulation (GDPR) in the UK and EU, the Health Insurance Portability and Accountability Act (HIPAA) in the USA, the Australian Privacy Principles (APP), New Zealand’s Privacy Act, and the Personal Information Protection and Electronic Documents Act (PIPEDA) in Canada.

Beyond regulatory compliance, we are also certified to globally recognized information management standards like ISO 27001:2022, SOC 2, and Cyber Essentials, demonstrating that our systems are secure and your information is protected at every level. We’re also proud to meet the high compliance standards required by the National Health Service (NHS) in the UK - our commitment to data security extends across all markets.

We have invested heavily in achieving global and regional privacy and data security certifications to ensure clinicians, patients, and organizations are protected while using our product.

You can view certification badges relevant to your country on our Safety page.

Does Heidi share my data with third parties?

At Heidi, we take the confidentiality and security of your data extremely seriously.

We will never engage in the sale of your personal, professional, or medical data under any circumstances.

We also don’t share your data with third parties for marketing purposes. Any product or marketing emails you receive about Heidi will come directly from us and you can unsubscribe from these at any time.

If I use the free version of Heidi, will you sell or use my data?

A statement often made about free software products is, “If it’s free, then you’re the product.”

We can’t speak for other companies. But with Heidi Health, privacy and data security practices are exactly the same for free and paid users.

We do not and will not sell any data to third parties—ever. We also don’t permanently store your data, meaning even if the company is bought, the “buyer” won’t have access to your data. 

So why do we offer Heidi for free?

The most powerful way for a clinician to see how Heidi can help them is to use it in their day-to-day life. By offering free but limited use of the product, clinicians can build trust that Heidi is the right medical AI scribe for them. From there, we see clinicians buying access to other Heidi features and telling other clinicians how much Heidi has helped them. Heidi’s growth is powered by clinicians - tens of thousands across 50 countries use Heidi in millions of consults every month - none of which would be possible without offering some free use of the product.

Will any staff within Heidi have access to my data?

The only instance in which an employee of Heidi will have access to your data is for the purpose of troubleshooting. This would only be at your request and with your consent. All actions undertaken are logged to ensure a clear audit trail so you remain confident in your patients’ safety.

  3. Legal and ethical issues

Are there any legal issues I need to be aware of when using an AI medical scribe?

Legal issues you should be aware of when using an AI medical scribe include:

  • Liability for clinical errors. While using an AI medical scribe, the clinician still holds full responsibility for the quality of the documentation they produce and the quality of care they deliver.
  • Patient consent requirements. Legal requirements involving patient consent and recording laws may vary by practice location. At a minimum, we recommend always seeking verbal and/or written consent from a patient each time an AI medical scribe is used for a clinical encounter.

How you choose to ask for consent depends on how you work with your patients and remains your decision. We have heard from our clinicians that some doctors include a disclaimer in their new patient intake forms, some ask for permission on the walk from the waiting room to the consult room, and some display a Heidi sticker or poster in public areas to disclose the use of a scribe for consults - all of which enables a patient to opt out before the consult begins.

Are AI medical scribes racially or culturally biased?

Large language models (LLMs) power many of the core functions of AI medical scribes. Because these models have been trained on existing data, there is some understandable concern that they may perpetuate, and possibly exacerbate, the cultural influences underpinning inequalities in the social determinants of health.

Heidi does not provide any clinical decision support, which in itself protects against the influence of biases in LLMs. As an extra safeguard, Heidi’s quality assurance and risk mitigation team monitors for and addresses instances of model bias. As with any note-taking software, your review, editing, and approval are key. Every clinician is in the loop and should be vigilant about identifying, correcting, and reporting any terminology or behavior that might indicate subtle or overt cultural bias in AI-generated documentation.

How does Heidi address legal and ethical considerations related to using AI medical scribes?

Our primary lawful bases for processing data are user consent and the delivery of our services to you.

Before any data is processed, users are informed about the types of data being collected, the purposes of collecting data, and guidelines for data processing. 

Consent is documented and can be withdrawn by the user at any time. Users are prompted to seek patient consent before scribing each encounter.

Heidi’s AI medical scribe processes user data to document medical interactions, ensuring that healthcare providers can deliver medical services as agreed upon in their patient care contracts. This processing is essential for fulfilling the obligations of providing accurate and timely medical records and support that is directly linked to the patient’s healthcare.

Heidi Health maintains clear protocols that ensure accountability for clinical documentation is well-defined. Clinicians are regularly reminded of their responsibility to review and confirm the accuracy of all notes.

Why can’t I find Heidi on the register of the appropriate regulatory bodies?  

Heidi is an AI-based software tool designed to reduce the time clinicians spend on administrative tasks like writing notes, assessments, and letters. As Heidi is not designed to help diagnose, treat, or prevent illness, disability, or injury, it is not considered a medical device and falls outside the scope of regulators and regulations including Australia’s Therapeutic Goods Administration, the US Food and Drug Administration, Canada’s Medical Devices Regulations, the UK Medical Devices Regulations, and the NZ Medicines Regulations.

We continuously monitor the ever-evolving regulatory landscape and the role of AI in healthcare, specifically around scribes. If the law changes to include AI scribes, rest assured we’ll do everything in our power to make sure that Heidi stays fully compliant.

  4. Workforce impact

What negative impacts could AI medical scribes have on the clinical workforce?

As we’ve discussed, Heidi’s intended use is as an assistant or resident that helps you write the plethora of clinical documentation demanded of you. Any documentation created will always require your review, editing, and submission.

Using Heidi as intended reduces the potential negative workforce impacts of AI medical scribes including:

  • Inappropriate delegation of tasks to AI (such as clinical decision-making)
  • Over-reliance on AI eroding clinical or clinical-adjacent skills (such as the clinical reasoning that occurs while writing notes)
  • Creation of low-value work for clinicians (such as fact-checking notes and documentation)

The above risks are important and must be monitored. The best way to negate these potential negative impacts is to use Heidi as intended: a tool to draft clinical documentation for your review, edit and approval.

How does Heidi manage clinical workforce risks?

Several controls are in place to mitigate the risk of over-reliance and/or inappropriate use of Heidi without adequate clinical verification, such as:

  • In-product reminders and warnings emphasizing the necessity for clinicians to check and verify documentation before transferring it to electronic medical records or other software
  • Training and ongoing support on best practices when using Heidi
  • Safeguards to prevent Heidi from making any clinical decisions or providing information from outside the healthcare encounter that is being processed

Most workforce risks are mitigated by ensuring clinicians follow the intended use guidelines of Heidi. We endeavor to do everything possible to prevent inappropriate use of our AI medical scribe. Employers, licensing bodies, and individual clinicians are also important stakeholders regarding workforce risks and we welcome their involvement in these important conversations.

  5. Regulatory challenges

How can regulations control AI medical scribes that can “self-evolve”?

New large language models (LLMs, the AI technology underpinning Heidi) are trained regularly to keep their knowledge up to date, with strict versioning in place from vendors. Continuous development is a core characteristic of what keeps AI useful. However, responsible AI practices—like those implemented at Heidi—ensure that models are tightly controlled and monitored.

While our AI medical scribe will improve with versions over time, it only does so under our strict frameworks and guidelines. This means model evolution is predictable and can be controlled by our team. As a result, we can adjust our product to align with any new or existing regulations regarding AI medical scribes. 

How does Heidi align with regulatory standards?

We have invested heavily in compliance and security efforts—both within Australia and internationally.

To ensure clinicians from around the world can safely use our AI medical scribe, we have completed the required compliance documentation within a range of jurisdictional guidelines and regulations, such as:

  • Australia: Australian Privacy Principles (APP)
  • USA: Health Insurance Portability and Accountability Act (HIPAA)
  • Canada: Personal Information Protection and Electronic Documents Act (PIPEDA)
  • European Union: General Data Protection Regulation (GDPR)
  • United Kingdom: General Data Protection Regulation (GDPR), Digital Technology Assessment Criteria (DTAC), Data Security and Protection Toolkit (DSPT), Clinical Risk Management in the Deployment and Use of Health IT Systems (DCB0129 - Hazard Log, Clinical Safety Case Report, Clinical Risk Management Plan), Information Commissioner’s Office AI Toolkit (ICO AI Toolkit)

We have fortified our compliance position with internationally recognized security frameworks, including ISO 27001 and SOC 2 Type 2.

Our goal has always been to make an AI medical scribe for all clinicians. We recognize that an essential part of this journey is ensuring our tool respects and adheres to all relevant data privacy and security regulations in the locations our users are based.

  6. Integration and workflow issues

Are there inefficiencies related to using an AI medical scribe?

Our experience is that most clinicians substantially reduce their overall documentation time by using an AI medical scribe. However, there are several potential inefficiencies that users should be aware of and safeguard against.

Additional time investments when using an AI medical scribe include:

  • Onboarding and ongoing training. Learning how to use Heidi will help you make the most of it and reduce inefficiencies.
  • Developing and implementing new policies and procedures. Upfront investment to codify Heidi’s use in your organization will pay dividends.
  • Reviewing AI-generated documentation for accuracy. Clinicians can find the review time that works best for them, for example at the end of each patient appointment or at particular moments of the day.

Will I definitely save time on clinical documentation by using Heidi?

We have published case studies as a guide to the potential benefits of using Heidi. If you’re not saving time with Heidi, let us know - we’d love to help you.

How does Heidi address integration and workflow challenges?

Heidi is designed by clinicians, for clinicians. As a result, our AI medical scribe fits seamlessly into contemporary clinical workflows. 

Heidi currently integrates with several EHR systems, and we are actively working on new integrations to enhance compatibility. 

Our customer success team provides clinicians and organizations with comprehensive training materials and ongoing support to ensure the smooth adoption and effective use of Heidi.

We also actively seek feedback from our users to continually improve our product. There are multiple channels through which clinicians can submit feature requests, report bugs, and receive responsive support for any issues that may be impacting their productivity.

How do I choose the right medical scribe for me?

With so much choice available but minimal time to investigate, choosing the right AI scribe can be overwhelming. From the thousands of clinician conversations we’ve had, here’s what clinicians have told us is important to consider.

Nothing beats testing the product yourself. Don’t rely on canned demos or videos before you buy. Get a free login and role-play a short two-to-three-minute consult with a colleague or family member. See how well the note reflects the consult, how simple it is to edit and adapt to your particular note-taking preferences, and how much time it saves you.

Ensure any tool under consideration is built with safety and security in mind. Review safety documentation and content such as this guide. You can always request more information from a company’s safety and compliance team.

See what others think of the AI scribes you are considering. It’s highly likely your colleagues have tried and tested other scribing tools; their learnings will save you valuable time. Read the reviews of your AI scribing options, not just on review sites like Trustpilot but on Reddit and Facebook too.

Make sure to put the product through its paces across one or more full clinic days before committing. This is the litmus test: can the AI medical scribe keep up with you? Seeing how your scribe supports you in the real world is the only way to be truly confident in its capabilities.

What if it isn’t your decision? The best thing any clinician can do is to test out tools like AI medical scribes and advocate for what they know will be the best choice to support them and their colleagues.

It’s the era of clinician empowerment, with medical AI scribes like Heidi by your side

AI medical scribes like Heidi have the potential to help address some of the key challenges currently faced by healthcare systems worldwide, such as clinician burnout, rising costs of care, and access issues.

Clinicians no longer have to suffer the admin in silence. The next generation of tech, like Heidi, is here and it’s built for you, the frontline clinician.

We know bringing AI into your clinical life can be overwhelming. We hope this content goes some way to alleviating your concerns and building your confidence to try an AI medical scribe like Heidi in the real world. If you have more questions, our inbox is always open at hello@heidihealth.com.

