Ink: Non-repudiation for Large Language Models (LLMs) in Healthcare

Martin Chapman, Elliot Fairweather, Christopher Hampson

Research output: Working paper/Preprint


Abstract

We are increasingly likely to see Large Language Models (LLMs) used in a healthcare context. To address potential issues around liability when this occurs, we introduce Ink, an LLM-backed chatbot that generates securely signed, non-repudiable records of the chats that have taken place. These records can then be validated at a later date if required. As a proof of concept, Ink is connected to several prominent LLMs, including GPT-3.5 and GPT-4.
Original language: English
Publication status: Submitted - 2023

