Ink: Non-repudiation for Large Language Models (LLMs) in Healthcare

Research output: Chapter in Book/Report/Conference proceeding › Poster abstract › peer-review


Abstract

We are increasingly likely to see Large Language Models (LLMs) used in a healthcare context. To address potential issues around liability when this occurs, we introduce Ink, an LLM-backed chatbot that generates securely signed, non-repudiable records of chats that have taken place. These chats can then be validated at a later date if required. As a proof of concept, Ink is connected to several prominent LLMs, including GPT-3.5/4.
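The abstract does not specify Ink's signing mechanism. As an illustrative sketch only (all function names here are hypothetical, and the Ed25519 scheme via the third-party `cryptography` package is an assumption), a chat transcript could be serialized, timestamped, and signed with an asymmetric key so that the signing party cannot later deny having produced the record, and a verifier can validate it at any later date:

```python
import json
from datetime import datetime, timezone

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)


def sign_chat(private_key: Ed25519PrivateKey, messages: list[dict]) -> dict:
    """Produce a signed, timestamped record of a chat transcript.

    Canonical JSON (sorted keys) ensures the verifier re-serializes
    the record to exactly the bytes that were signed.
    """
    record = {
        "messages": messages,
        "signed_at": datetime.now(timezone.utc).isoformat(),
    }
    payload = json.dumps(record, sort_keys=True).encode()
    return {"record": record, "signature": private_key.sign(payload).hex()}


def verify_chat(public_key: Ed25519PublicKey, signed: dict) -> bool:
    """Validate a signed chat record; returns False if tampered with."""
    payload = json.dumps(signed["record"], sort_keys=True).encode()
    try:
        public_key.verify(bytes.fromhex(signed["signature"]), payload)
        return True
    except InvalidSignature:
        return False


# Example: sign a short clinician-facing exchange, then verify it.
key = Ed25519PrivateKey.generate()
chat = [
    {"role": "user", "content": "Is this dosage safe?"},
    {"role": "assistant", "content": "Please consult your clinician."},
]
signed = sign_chat(key, chat)
print(verify_chat(key.public_key(), signed))  # True for an untampered record
```

Because only the holder of the private key can produce a valid signature, a verified record is evidence the signer produced it, which is the non-repudiation property the abstract refers to.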
Original language: English
Title of host publication: American Medical Informatics Association (AMIA) Informatics Summit
Publication status: Published - 2024

