Eka Care Ekascribe TypeScript SDK Integration

This guide explains how to integrate Eka Care's Ekascribe TypeScript SDK.

Overview

Ekascribe allows you to transcribe audio files into structured medical data. The TypeScript SDK simplifies this integration by handling authentication internally and exposing two functions: uploadAudioFileAndStartTranscription and getTranscribedData.
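
All snippets below assume an already-initialized SDK client, referred to as ekaInstance. The package name, constructor, and configuration keys shown here are assumptions for illustration only; check the SDK reference for the exact API.

// Hypothetical initialization sketch — the import path, class name, and config
// keys are assumptions; consult the Ekascribe SDK documentation for the real ones.
import { EkaCareClient } from "@eka-care/ekascribe-sdk"; // assumed package name

const ekaInstance = new EkaCareClient({
  clientId: process.env.EKA_CLIENT_ID ?? "", // credentials issued by Eka Care (assumed option names)
  clientSecret: process.env.EKA_CLIENT_SECRET ?? "",
});
// ekaInstance.ekaScribe is what the examples below call into.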

Supported Output Templates

Eka Care supports multiple output templates for different medical documentation needs. Choose the appropriate template ID based on your requirements:

| Template ID | Description | Use Case |
| --- | --- | --- |
| clinical_notes_template | Comprehensive clinical notes with structured medical information | General clinical documentation, patient consultations |
| eka_emr_template | EMR-compatible format for electronic medical records | Integration with EMR systems |
| transcript_template | Basic transcription with minimal structuring | Simple audio-to-text conversion |
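
If you want compile-time safety around template selection, the IDs above can be modeled as a string-literal union. This is an illustrative sketch only; these type names are not exported by the SDK.

// Illustrative types — not part of the SDK's public API.
type TemplateId =
  | "clinical_notes_template"
  | "eka_emr_template"
  | "transcript_template";

interface OutputFormatTemplate {
  template_id: TemplateId;
  codification_needed: boolean; // as used in the examples below
}

const output_format_template: OutputFormatTemplate[] = [
  { template_id: "clinical_notes_template", codification_needed: true },
  { template_id: "eka_emr_template", codification_needed: false },
];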

Supported Input Languages

Eka Care supports transcription in multiple languages. Specify the appropriate language ID in the input_language parameter:

| Language ID | Language Name |
| --- | --- |
| en-IN | English (India) |
| en-US | English (United States) |
| hi | Hindi |
| gu | Gujarati |
| kn | Kannada |
| ml | Malayalam |
| ta | Tamil |
| te | Telugu |
| bn | Bengali |
| mr | Marathi |
| pa | Punjabi |
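
Similarly, the language IDs can be narrowed to a literal union so that invalid values are caught at compile time. Again, this type is illustrative and not exported by the SDK.

// Illustrative type — not part of the SDK's public API.
type InputLanguage =
  | "en-IN" | "en-US"
  | "hi" | "gu" | "kn" | "ml"
  | "ta" | "te" | "bn" | "mr" | "pa";

// input_language accepts an array, so multiple languages can be passed.
const input_language: InputLanguage[] = ["en-IN", "hi"];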

Step 1: Register Webhook (One-time Setup)

Before using the Ekascribe service, register a webhook to receive notifications when transcription is complete:

curl --request POST \
  --url https://api.eka.care/notification/v1/connect/webhook/subscriptions \
  --header 'Authorization: Bearer <auth_token>' \
  --header 'Content-Type: application/json' \
  --data '{
    "event_names": [
      "v2rx.completed"
    ],
    "endpoint": "your-endpoint-url",
    "signing_key": "supersecretkey",
    "protocol": "https"
  }'
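
Your endpoint then receives a notification when a session finishes processing. The sketch below is a minimal Express receiver; the event body's field names (event, data.txn_id) and the omitted signature-verification step are assumptions, so validate them against the actual v2rx.completed payload and your signing_key.

import express from "express";

const app = express();
app.use(express.json());

// Minimal webhook receiver sketch. NOTE: verifying the request against your
// signing_key is omitted here, and the payload field names are assumed.
// ekaInstance is the initialized SDK client (see the initialization sketch above).
app.post("/ekascribe/webhook", async (req, res) => {
  const event = req.body;

  if (event?.event === "v2rx.completed") {
    // Assumed field names — confirm against the real payload you receive.
    const sessionId = event?.data?.txn_id ?? event?.data?.session_id;
    const result = await ekaInstance.ekaScribe.getTranscribedData({
      session_id: sessionId,
    });
    console.log("Transcribed output:", result.data.output);
  }

  res.sendStatus(200); // acknowledge receipt
});

app.listen(3000);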

Step 2: Upload Audio and Start Transcription

Then, from any frontend or backend JavaScript runtime, prepare your payload with the voice recordings and additional data.

Pseudo React code to run on click of a submit button:

const handleSubmit = async () => {
  const response =
    await ekaInstance.ekaScribe.uploadAudioFileAndStartTranscription({
      txn_id: "717312jkjs212313jj1",
      files: selectedFiles, // array of audio files. can be taken from the user/your app with the <input /> tag.
      input_language: ["en-US"],
      output_format_template: [
        {
          codification_needed: true,
          template_id: "clinical_notes_template",
        },
        {
          codification_needed: false,
          template_id: "eka_emr_template",
        },
      ],
      mode: "consultation",
      transfer: "non-vaded",
    });
  console.log(response, "response");
};
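
For completeness, selectedFiles in the snippet above can come from a standard file input. A minimal sketch follows; the component and state names are illustrative.

// Minimal sketch: collecting audio files with an <input /> element in React.
import { useState } from "react";

function RecordingUploader() {
  const [selectedFiles, setSelectedFiles] = useState<File[]>([]);

  const handleSubmit = async () => {
    // Call uploadAudioFileAndStartTranscription with selectedFiles,
    // exactly as shown in the snippet above.
  };

  return (
    <>
      <input
        type="file"
        accept="audio/*"
        multiple
        onChange={(e) => setSelectedFiles(Array.from(e.target.files ?? []))}
      />
      <button onClick={handleSubmit} disabled={selectedFiles.length === 0}>
        Submit
      </button>
    </>
  );
}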

Pseudo code when uploading from a Node.js environment:

(async () => {
  try {
    await ekaInstance.ekaScribe.uploadAudioFileAndStartTranscription({
      files: ["/Users/govindganeriwal/Desktop/testReco1.m4a"],
      input_language: ["en-US", "en-IN"],
      output_format_template: [
        {
          codification_needed: true,
          template_id: "clinical_notes_template",
        },
        {
          codification_needed: false,
          template_id: "eka_emr_template",
        },
      ],
      mode: "consultation",
      transfer: "non-vaded",
      txn_id: "9f14c5ba-ee0b-4642-8f38-4",
    });
  } catch (e) {
    console.log("error", e);
  }
})();

Step 3: Fetch the Transcribed Data

Once transcription completes (you will receive the v2rx.completed event on your webhook), fetch the result. Pseudo code:

const transcribedData = await ekaInstance.ekaScribe.getTranscribedData({
  session_id: "9f14c5ba-ee0b-4642-8f38-4", // same as the txn_id used when starting transcription
});
console.log(transcribedData.data.output);

// When transcription succeeds, the response has this shape:

{
  "data": {
    "output": [
      {
        "template_id": "clinical_notes_template",
        "value": "<base 64 encoded data>",
        "type": "markdown",
        "name": "Clinical Notes",
        "status": "success",
        "errors": [],
        "warnings": []
      },
      {
        "template_id": "eka_emr_template",
        "value": "<base 64 encoded data>",
        "type": "eka_emr",
        "name": "Eka EMR Format",
        "status": "success",
        "errors": [],
        "warnings": []
      }
    ],
    "additional_data": {}
  }
}
// make sure to decode each data.output[].value; this can be as simple as using `atob`
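
A small decoding sketch, assuming value is base64-encoded UTF-8 text. Use Buffer in Node.js or atob in the browser:

// Decode one output entry from the response above.
const entry = transcribedData.data.output[0];

// Node.js
const decodedNode = Buffer.from(entry.value, "base64").toString("utf-8");

// Browser (atob covers ASCII; for non-ASCII UTF-8 content, pair it with TextDecoder)
const decodedBrowser = atob(entry.value);

console.log(decodedNode); // e.g. markdown clinical notes when type === "markdown"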

Combined usage

An end-to-end example that uses both functions:

// Example: Upload audio and fetch the transcribed output

(async () => {
  try {
    // Step 1: Upload audio file and start transcription
    await ekaInstance.ekaScribe.uploadAudioFileAndStartTranscription({
      files: ["/path/to/audio-file.m4a"], // Accepts File objects (frontend) or file paths (backend)
      input_language: ["en-US"], // Language(s) spoken in the audio
      output_format_template: [
        {
          template_id: "clinical_notes_template",
          codification_needed: true,
        },
        {
          template_id: "eka_emr_template",
          codification_needed: false,
        },
      ],
      mode: "consultation",           // Use-case context
      transfer: "non-vaded",          // Transfer type: 'vaded' | 'non-vaded'
      txn_id: "unique-transaction-id", // Used to track the transcription session
    });

    /**
     * Step 2:
     * In production, you would ideally not use setTimeout.
     * Instead:
     *   1. Register a webhook with your server.
     *   2. Once you receive the callback from the webhook (v2rx.completed),
     *      call `getTranscribedData` using the corresponding `txn_id` or `session_id`.
     *
     * For demonstration purposes, we simulate a delay using setTimeout.
     */
    setTimeout(async () => {
      const response = await ekaInstance.ekaScribe.getTranscribedData({
        session_id: "associated-session-id" or the "transaction_id" with which you initiated,
      });

      console.log("Transcribed Output:", response.data.output);
    }, 20000); // Simulate wait: 20 seconds
  } catch (error) {
    console.error("Transcription error:", error);
  }
})();