TRTC

The TRTC object is created using TRTC.create() and provides the core real-time audio and video capabilities: entering a room, previewing, publishing, and subscribing to streams.


Methods

(static) create() → {TRTC}

Create a TRTC object for implementing functions such as entering a room, previewing, publishing, and subscribing to streams.

Note:

  • You must create a TRTC object first and call its methods and listen to its events to implement various functions required by the business.
Example
// Create a TRTC object
const trtc = TRTC.create();
Returns:

TRTC object

Type
TRTC

(async) enterRoom(options)

Enter a video call room.

  • Entering a room means starting a video call session. Only after entering the room successfully can you make audio and video calls with other users in the room.
  • You can publish local audio and video streams through startLocalAudio() and startLocalVideo() respectively. After successful publishing, other users in the room will receive the REMOTE_AUDIO_AVAILABLE and REMOTE_VIDEO_AVAILABLE event notifications.
  • By default, the SDK automatically plays remote audio. You need to call startRemoteVideo() to play remote video.
Example
const trtc = TRTC.create();
await trtc.enterRoom({ roomId: 8888, sdkAppId, userId, userSig });
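// A sketch of entering an interactive live-streaming room as an audience member.
// The parameter values (strRoomId, scene, role) are illustrative and taken from the parameter list below.
await trtc.enterRoom({
  sdkAppId,
  userId,
  userSig,
  strRoomId: 'live_room_001',
  scene: TRTC.TYPE.SCENE_LIVE,
  role: TRTC.TYPE.ROLE_AUDIENCE
});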
Parameters:
Name Type Description
options object required

Enter room parameters

Properties
Name Type Default Description
sdkAppId number required

sdkAppId
You can obtain the sdkAppId information in the Application Information section after creating a new application by clicking Application Management > Create Application in the TRTC Console.

userId string required

User ID
It is recommended to limit the length to 32 bytes, and only allow uppercase and lowercase English letters (a-zA-Z), numbers (0-9), underscores, and hyphens.

userSig string required

UserSig signature
Please refer to the UserSig documentation for how to calculate userSig.

roomId number

Numeric room ID. The value must be an integer between 1 and 4294967294.
If you need to use a string-type room id, use the strRoomId parameter instead. One of roomId and strRoomId must be passed in. If both are passed in, roomId takes precedence.

strRoomId string

String type room id, the length is limited to 64 bytes, and only supports the following characters:

  • Uppercase and lowercase English letters (a-zA-Z)
  • Numbers (0-9)
  • Space ! # $ % & ( ) + - : ; < = . > ? @ [ ] ^ _ { } | ~ ,

Note: It is recommended to use a numeric roomId. The string-type room id "123" is not the same room as the numeric room id 123.
scene string

Application scene, currently supports the following two scenes:

  • TRTC.TYPE.SCENE_RTC (default) Real-time call scene, suitable for 1-to-1 audio and video calls or online meetings with up to 300 participants. See Upstream Users Limitation.
  • TRTC.TYPE.SCENE_LIVE Interactive live streaming scene, suitable for online live streaming with up to 100,000 participants, but you need to specify the role field in the options parameter described next.
role string

User role, only meaningful in the TRTC.TYPE.SCENE_LIVE scene (the TRTC.TYPE.SCENE_RTC scene does not need to specify a role). Two roles are currently supported:

  • TRTC.TYPE.ROLE_ANCHOR (default) Anchor
  • TRTC.TYPE.ROLE_AUDIENCE Audience

Note: The audience role does not have permission to publish local audio and video, only permission to watch remote streams. If an audience member wants to interact with the anchor by co-anchoring, switch the role to anchor through switchRole() before publishing local audio and video.
autoReceiveAudio boolean true

Whether to automatically receive audio. When a remote user publishes audio, the SDK automatically plays the remote user's audio.

autoReceiveVideo boolean false

Whether to automatically receive video. When a remote user publishes video, the SDK automatically subscribes and decodes the remote video. You need to call startRemoteVideo to play the remote video.

enableAutoPlayDialog boolean

Whether to enable the SDK's automatic playback failure dialog box, default: true.

  • Enabled by default. When automatic playback fails, the SDK will pop up a dialog box to guide the user to click the page to restore audio and video playback.
  • Can be set to false in order to turn off. Refer to Handle Autoplay Restriction.
proxy string | ProxyServer

proxy config. Refer to Handle Firewall Restriction.

privateMapKey string

Key for entering a room. If permission control is required, carry this parameter (an empty or incorrect value will cause room entry to fail).
Refer to privateMapKey permission configuration.

Throws:

(async) exitRoom()

Exit the current audio and video call room.

  • After exiting the room, the connection with remote users is closed, remote audio and video are no longer received or played, and publishing of local audio and video stops.
  • Capture and preview of the local camera and microphone do not stop. You can call stopLocalVideo() and stopLocalAudio() to stop capturing the local camera and microphone.
Example
await trtc.exitRoom();
Throws:

(async) switchRole(role, [option])

Switches the user role, only effective in TRTC.TYPE.SCENE_LIVE interactive live streaming mode.

In interactive live streaming mode, a user may need to switch between "audience" and "anchor". You can determine the role through the role field in enterRoom(), or switch roles after entering the room through switchRole.

  • Audience switches to anchor, call trtc.switchRole(TRTC.TYPE.ROLE_ANCHOR) to convert the user role to TRTC.TYPE.ROLE_ANCHOR anchor role, and then call startLocalVideo() and startLocalAudio() to publish local audio and video as needed.
  • Anchor switches to audience, call trtc.switchRole(TRTC.TYPE.ROLE_AUDIENCE) to convert the user role to TRTC.TYPE.ROLE_AUDIENCE audience role. If there is already published local audio and video, the SDK will cancel the publishing of local audio and video.

Notice:

  • This interface can only be called after entering the room successfully.
  • After closing the camera and microphone, it is recommended to switch to the audience role promptly so that the anchor role does not occupy one of the 50 upstream slots.
Examples
// After entering the room successfully
// TRTC.TYPE.SCENE_LIVE interactive live streaming mode, audience switches to anchor
await trtc.switchRole(TRTC.TYPE.ROLE_ANCHOR);
// Switch from audience role to anchor role and start streaming
await trtc.startLocalVideo();
// TRTC.TYPE.SCENE_LIVE interactive live streaming mode, anchor switches to audience
await trtc.switchRole(TRTC.TYPE.ROLE_AUDIENCE);
// Since v5.3.0+
await trtc.switchRole(TRTC.TYPE.ROLE_ANCHOR, { privateMapKey: 'your new privateMapKey' });
Parameters:
Name Type Description
role string required

User role

  • TRTC.TYPE.ROLE_ANCHOR anchor, can publish local audio and video, up to 50 anchors can publish local audio and video in a single room at the same time.
  • TRTC.TYPE.ROLE_AUDIENCE audience, cannot publish local audio and video, can only watch remote streams, and there is no upper limit on the number of audience members in a single room.
option object
Properties
Name Type Description
privateMapKey string

Since v5.3.0+
The privateMapKey may expire after a timeout, so you can use this parameter to update the privateMapKey.

Throws:

destroy()

Destroy the TRTC instance

After exiting the room, if the business side no longer needs to use trtc, you need to call this interface to destroy the trtc instance in time and release related resources.

Note:

  • The trtc instance cannot be used again after it is destroyed.
  • If you have entered a room, you must exit it successfully by calling trtc.exitRoom() before calling this interface to destroy the trtc instance.
Example
// When the call is over
await trtc.exitRoom();
// If the trtc is no longer needed, destroy the trtc and release the reference.
trtc.destroy();
trtc = null;
Throws:

(async) startLocalAudio([config])

Start collecting audio from the local microphone and publish it to the current room.

  • When to call: can be called before or after entering the room, cannot be called repeatedly.
  • Only one microphone can be opened per trtc instance. If you need to open a second microphone for testing while one is already open, create multiple trtc instances.
Examples
// Collect the default microphone and publish
await trtc.startLocalAudio();
// The following example tests the microphone volume, which can be used for a microphone check.
trtc.enableAudioVolumeEvaluation();
trtc.on(TRTC.EVENT.AUDIO_VOLUME, event => { });
// No need to publish audio for testing microphone
await trtc.startLocalAudio({ publish: false });
// After the test is completed, turn off the microphone
await trtc.stopLocalAudio();
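// A sketch of capturing a specific microphone with the standard audio profile.
// The device index is illustrative; see TRTC.getMicrophoneList() below.
const microphoneList = await TRTC.getMicrophoneList();
if (microphoneList[0]) {
  await trtc.startLocalAudio({
    option: {
      microphoneId: microphoneList[0].deviceId,
      profile: TRTC.TYPE.AUDIO_PROFILE_STANDARD
    }
  });
}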
Parameters:
Name Type Description
config object

Configuration item

Properties
Name Type Description
publish boolean

Whether to publish local audio to the room, default is true. If you call this interface before entering the room and publish = true, the SDK will automatically publish after entering the room. You can get the publish state by listening this event PUBLISH_STATE_CHANGED.

mute boolean

Whether to mute microphone. Refer to: Turn On/Off Camera/Mic.

option object

Local audio options

Properties
Name Type Description
microphoneId string

Specify which microphone to use

audioTrack MediaStreamTrack

Custom audioTrack. Custom Capturing and Rendering.

captureVolume number

Set the capture volume of the microphone. The default value is 100; values above 100 amplify the captured audio. Since v5.2.1+.

earMonitorVolume number

Set the in-ear monitoring (ear return) volume, value range [0, 100]. Ear monitoring of the local microphone is muted by default.

profile string

Audio encoding configuration, default TRTC.TYPE.AUDIO_PROFILE_STANDARD

Throws:

(async) updateLocalAudio([config])

Update the configuration of the local microphone.

  • When to call: This interface needs to be called after startLocalAudio() is successful and can be called multiple times.
  • This method uses incremental update: only update the passed parameters, and keep the parameters that are not passed unchanged.
Example
// Switch microphone
const microphoneList = await TRTC.getMicrophoneList();
if (microphoneList[1]) {
  await trtc.updateLocalAudio({ option: { microphoneId: microphoneList[1].deviceId }});
}
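// Mute the microphone without stopping capture
await trtc.updateLocalAudio({ mute: true });
// Raise the capture volume; per the captureVolume option, values above 100 amplify the input (since v5.2.1)
await trtc.updateLocalAudio({ option: { captureVolume: 120 } });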
Parameters:
Name Type Description
config object
Properties
Name Type Description
publish boolean

Whether to publish local audio to the room. You can get the publish state by listening this event PUBLISH_STATE_CHANGED.

mute boolean

Whether to mute microphone. Refer to: Turn On/Off Camera/Mic.

option object

Local audio configuration

Properties
Name Type Description
microphoneId string

Specify which microphone to use (for switching microphones).

audioTrack MediaStreamTrack

Custom audioTrack. Custom Capturing and Rendering.

captureVolume number

Set the capture volume of the microphone. The default value is 100; values above 100 amplify the captured audio. Since v5.2.1+.

earMonitorVolume number

Set the in-ear monitoring (ear return) volume, value range [0, 100]. Ear monitoring of the local microphone is muted by default.

Throws:

(async) stopLocalAudio()

Stop collecting and publishing the local microphone.

  • If you just want to mute the microphone, please use updateLocalAudio({ mute: true }). Refer to: Turn On/Off Camera/Mic.
Example
await trtc.stopLocalAudio();
Throws:

(async) startLocalVideo([config])

Start collecting video from the local camera, play the camera's video on the specified HTMLElement tag, and publish the camera's video to the current room.

  • When to call: can be called before or after entering the room, but cannot be called repeatedly.
  • Only one camera can be started per trtc instance. If you need to start another camera for testing while one camera is already started, you can create multiple trtc instances to achieve this.
Examples
// Preview and publish the camera
await trtc.startLocalVideo({
  view: document.getElementById('localVideo'), // Preview the video on the element with the DOM elementId of localVideo.
});
// Preview the camera without publishing. Can be used for camera testing.
const config = {
  view: document.getElementById('localVideo'), // Preview the video on the element with the DOM elementId of localVideo.
  publish: false // Do not publish the camera
}
await trtc.startLocalVideo(config);
// Call updateLocalVideo when you need to publish the video
await trtc.updateLocalVideo({ publish:true });
// Use a specified camera.
const cameraList = await TRTC.getCameraList();
if (cameraList[0]) {
  await trtc.startLocalVideo({
    view: document.getElementById('localVideo'), // Preview the video on the element with the DOM elementId of localVideo.
    option: {
      cameraId: cameraList[0].deviceId,
    }
  });
}
// use front camera on mobile device.
await trtc.startLocalVideo({ view, option: { useFrontCamera: true }});
// use rear camera on mobile device.
await trtc.startLocalVideo({ view, option: { useFrontCamera: false }});
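// Disable mirroring for both the local preview and the published stream
await trtc.startLocalVideo({ view, option: { mirror: false } });
// Set the main-stream encoding profile; '720p' is an illustrative preset, a VideoProfile object can also be passed
await trtc.startLocalVideo({ view, option: { profile: '720p' } });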
Parameters:
Name Type Description
config object
Properties
Name Type Description
view string | HTMLElement | Array.<HTMLElement> | null

The HTMLElement instance or ID for local video preview. If not passed or passed as null, the video will not be played.

publish boolean

Whether to publish the local video to the room. If you call this interface before entering the room and publish = true, the SDK will automatically publish after entering the room. You can get the publish state by listening this event PUBLISH_STATE_CHANGED.

mute boolean | string

Whether to mute the camera. Supports passing in an image URL string; the image will be published instead of the camera stream, and other users in the room will receive the REMOTE_VIDEO_AVAILABLE event. This cannot be set while the camera is turned off. More information: Turn On/Off Camera/Mic.

option object

Local video configuration

Properties
Name Type Description
cameraId string

Specify which camera to use (for switching cameras).

useFrontCamera boolean

Whether to use the front camera.

videoTrack MediaStreamTrack

Custom videoTrack. Custom Capturing and Rendering.

mirror 'view' | 'publish' | 'both' | boolean

Video mirroring mode, default is 'view'.

  • 'view': You see yourself as a mirror image, and the other person sees you as a non-mirror image.
  • 'publish': The other person sees you as a mirror image, and you see yourself as a non-mirror image.
  • 'both': You see yourself as a mirror image, and the other person sees you as a mirror image.
  • false: Boolean value, represents no mirroring.

Note: Before version 5.3.2, only a boolean value can be passed, where true means the local preview is mirrored and false means no mirroring.

fillMode 'contain' | 'cover' | 'fill'

Video fill mode. The default is cover. Refer to the CSS object-fit property.

profile string | VideoProfile

Video encoding parameters for the main video. Default value is 480p_2.

small string | boolean | VideoProfile

Video encoding parameters for the small video. Refer to Multi-Person Video Calls

qosPreference QOS_PREFERENCE_SMOOTH | QOS_PREFERENCE_CLEAR

Set the video encoding strategy for weak networks: Smooth first (default) (QOS_PREFERENCE_SMOOTH) or Clear first (QOS_PREFERENCE_CLEAR).

Throws:

(async) updateLocalVideo([config])

Update the local camera configuration.

  • This interface needs to be called after startLocalVideo() is successful.
  • This interface can be called multiple times.
  • This method uses incremental update: only updates the passed-in parameters, and keeps the parameters that are not passed in unchanged.
Examples
// Switch camera
const cameraList = await TRTC.getCameraList();
if (cameraList[1]) {
  await trtc.updateLocalVideo({ option: { cameraId: cameraList[1].deviceId }});
}
// Stop publishing video, but keep local preview
await trtc.updateLocalVideo({ publish:false });
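// Mute the camera and publish a placeholder image instead (the image URL is illustrative)
await trtc.updateLocalVideo({ mute: 'https://example.com/placeholder.png' });
// Resume publishing the camera stream
await trtc.updateLocalVideo({ mute: false });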
Parameters:
Name Type Description
config object
Properties
Name Type Description
view string | HTMLElement | Array.<HTMLElement> | null

The HTMLElement instance or ID for the camera preview. If not passed in or passed in null, the video will not be rendered, but the stream is still published and consumes bandwidth.

publish boolean

Whether to publish the local video to the room. You can get the publish state by listening this event PUBLISH_STATE_CHANGED.

mute boolean | string

Whether to mute the camera. Supports passing in an image URL string; the image will be published instead of the camera stream, and other users in the room will receive the REMOTE_VIDEO_AVAILABLE event. This cannot be set while the camera is turned off. More information: Turn On/Off Camera/Mic.

option object

Local video configuration

Properties
Name Type Description
cameraId string

Specify which camera to use

useFrontCamera boolean

Whether to use the front camera

videoTrack MediaStreamTrack

Custom videoTrack. Custom Capturing and Rendering.

mirror 'view' | 'publish' | 'both' | boolean

Video mirroring mode, default is 'view'.

  • 'view': You see yourself as a mirror image, and the other person sees you as a non-mirror image.
  • 'publish': The other person sees you as a mirror image, and you see yourself as a non-mirror image.
  • 'both': You see yourself as a mirror image, and the other person sees you as a mirror image.
  • false: Boolean value, represents no mirroring.
fillMode 'contain' | 'cover' | 'fill'

Video fill mode. Refer to the CSS object-fit property

profile string | VideoProfile

Video encoding parameters for the main stream

small string | boolean | VideoProfile

Video encoding parameters for the small video. Refer to Multi-Person Video Calls

qosPreference QOS_PREFERENCE_SMOOTH | QOS_PREFERENCE_CLEAR

Set the video encoding strategy for weak networks: Smooth first (QOS_PREFERENCE_SMOOTH) or Clear first (QOS_PREFERENCE_CLEAR).

Throws:

(async) stopLocalVideo()

Stop capturing, previewing, and publishing the local camera.

Example
await trtc.stopLocalVideo();
Throws:

(async) startScreenShare([config])

Start screen sharing.

Example
// Start screen sharing
await trtc.startScreenShare();
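// A sketch of capturing system audio along with the screen (subject to browser support)
await trtc.startScreenShare({ option: { systemAudio: true } });
// Prefer offering the current tab in the browser's capture picker (Chrome 94+)
await trtc.startScreenShare({ option: { preferDisplaySurface: 'current-tab' } });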
Parameters:
Name Type Description
config object
Properties
Name Type Description
view string | HTMLElement | Array.<HTMLElement> | null

The HTMLElement instance or Id for previewing local screen sharing. If not passed or passed as null, local screen sharing will not be rendered.

publish boolean

Whether to publish screen sharing to the room. The default is true. If you call this interface before entering the room and publish = true, the SDK will automatically publish after entering the room. You can get the publish state by listening this event PUBLISH_STATE_CHANGED.

option object

Screen sharing configuration

Properties
Name Type Default Description
systemAudio boolean

Whether to capture system audio. The default is false.

fillMode 'contain' | 'cover' | 'fill'

Video fill mode. The default is contain, refer to CSS object-fit property.

profile ScreenShareProfile

Screen sharing encoding configuration. Default value is 1080p.

qosPreference QOS_PREFERENCE_SMOOTH | QOS_PREFERENCE_CLEAR

Set the video encoding strategy for weak networks: Smooth first (QOS_PREFERENCE_SMOOTH) or Clear first (default) (QOS_PREFERENCE_CLEAR).

captureElement HTMLElement

Capture screen from the specified element of current tab. Available on Chrome 104+.

preferDisplaySurface 'current-tab' | 'tab' | 'window' | 'monitor' 'monitor'

The prefer display surface for screen sharing. Available on Chrome 94+.

  • The default is 'monitor', which means monitor (full screen) capture is shown first in the browser's screen-sharing picker.
  • If set to 'current-tab', the picker will only offer the current tab.
Throws:

(async) updateScreenShare([config])

Update screen sharing configuration

  • This interface needs to be called after startScreenShare() is successful.
  • This interface can be called multiple times.
  • This method uses incremental update: only update the passed-in parameters, and keep the parameters that are not passed-in unchanged.
Example
// Stop screen sharing, but keep the local preview of screen sharing
await trtc.updateScreenShare({ publish:false });
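// Adjust the fill mode of the screen-sharing preview
await trtc.updateScreenShare({ option: { fillMode: 'contain' } });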
Parameters:
Name Type Description
config object
Properties
Name Type Default Description
view string | HTMLElement | Array.<HTMLElement> | null

The HTMLElement instance or Id for screen sharing preview. If not passed in or passed in null, the screen sharing will not be rendered.

publish boolean true

Whether to publish screen sharing to the room

option object

Screen sharing configuration

Properties
Name Type Description
fillMode 'contain' | 'cover' | 'fill'

Video fill mode. The default is contain, refer to CSS object-fit property.

qosPreference QOS_PREFERENCE_SMOOTH | QOS_PREFERENCE_CLEAR

Set the video encoding strategy for weak networks: Smooth first (QOS_PREFERENCE_SMOOTH) or Clear first (QOS_PREFERENCE_CLEAR).

Throws:

(async) stopScreenShare()

Stop screen sharing.

Example
await trtc.stopScreenShare();
Throws:

(async) startRemoteVideo([config])

Play remote video

Example
trtc.on(TRTC.EVENT.REMOTE_VIDEO_AVAILABLE, ({ userId, streamType }) => {
  // You need to place the video container in the DOM in advance, and it is recommended to use `${userId}_${streamType}` as the element id.
  trtc.startRemoteVideo({ userId, streamType, view: `${userId}_${streamType}` });
})
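// Alternatively, subscribe to the small stream of the main video (see Multi-Person Video Calls),
// assuming userId and view are obtained as in the handler above
await trtc.startRemoteVideo({ userId, streamType: TRTC.TYPE.STREAM_TYPE_MAIN, view, option: { small: true } });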
Parameters:
Name Type Description
config object
Properties
Name Type Description
view string | HTMLElement | Array.<HTMLElement> | null

The HTMLElement instance or Id used to play remote video. If not passed or passed null, the video will not be rendered, but the bandwidth will still be consumed.

userId string required

Remote user ID

streamType TRTC.TYPE.STREAM_TYPE_MAIN | TRTC.TYPE.STREAM_TYPE_SUB required

Remote stream type

option object

Remote video configuration

Properties
Name Type Default Description
small boolean

Whether to subscribe to the small stream

mirror boolean

Whether to enable mirror

fillMode 'contain' | 'cover' | 'fill'

Video fill mode. Refer to the CSS object-fit property.

receiveWhenViewVisible boolean

Since v5.4.0
Subscribe video only when view is visible. Refer to: Multi-Person Video Calls.

viewRoot HTMLElement document.body

Since v5.4.0
The root element is the parent element of the view and is used to calculate whether the view is visible relative to the root. The default value is document.body, and it is recommended that you use the first-level parent of the video view list. Refer to: Multi-Person Video Calls.

Throws:

(async) updateRemoteVideo([config])

Update remote video playback configuration

  • This method should be called after startRemoteVideo is successful.
  • This method can be called multiple times.
  • This method uses incremental updates, so only the configuration items that need to be updated need to be passed in.
Example
const config = {
 view: document.getElementById(userId), // you can use a new view to update the position of video.
 userId,
 streamType: TRTC.TYPE.STREAM_TYPE_MAIN
}
await trtc.updateRemoteVideo(config);
Parameters:
Name Type Description
config object
Properties
Name Type Description
view string | HTMLElement | Array.<HTMLElement> | null

The HTMLElement instance or Id used to play remote video. If not passed or passed null, the video will not be rendered, but the bandwidth will still be consumed.

userId string required

Remote user ID

streamType TRTC.TYPE.STREAM_TYPE_MAIN | TRTC.TYPE.STREAM_TYPE_SUB required

Remote stream type

option object

Remote video configuration

Properties
Name Type Default Description
small boolean

Whether to subscribe to the small stream. Refer to: Multi-Person Video Calls.

mirror boolean

Whether to enable mirror

fillMode 'contain' | 'cover' | 'fill'

Video fill mode. Refer to the CSS object-fit property.

receiveWhenViewVisible boolean

Since v5.4.0
Subscribe video only when view is visible. Refer to: Multi-Person Video Calls.

viewRoot HTMLElement document.body

Since v5.4.0
The root element is the parent element of the view and is used to calculate whether the view is visible relative to the root. The default value is document.body, and it is recommended that you use the first-level parent of the video view list. Refer to: Multi-Person Video Calls.

Throws:

(async) stopRemoteVideo(config)

Used to stop remote video playback.

Example
// Stop playing video from all remote users
await trtc.stopRemoteVideo({ userId: '*' });
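// Stop playing a specific remote user's screen sharing (sub stream); the userId is illustrative
await trtc.stopRemoteVideo({ userId: 'remoteUserId', streamType: TRTC.TYPE.STREAM_TYPE_SUB });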
Parameters:
Name Type Description
config object required

Remote video configuration

Properties
Name Type Description
userId string required

Remote user ID, '*' represents all users.

streamType TRTC.TYPE.STREAM_TYPE_MAIN | TRTC.TYPE.STREAM_TYPE_SUB

Remote stream type. This field is required when userId is not '*'.

Throws:

(async) muteRemoteAudio(userId, mute)

Mute a remote user and stop subscribing to that user's audio data. This only affects the current user; other users in the room can still hear the muted user.

Note:

  • By default, after entering the room, the SDK will automatically play remote audio. You can call this interface to mute or unmute remote users.
  • If autoReceiveAudio = false is passed in when entering the room, remote audio will not be played automatically. When playback is required, call this method with mute set to false to play the remote audio.
  • This interface can be called before or after entering the room (enterRoom), and the mute state is reset to false after exiting the room (exitRoom).
  • If you want to continue subscribing audio data from the user but not play it, you can call setRemoteAudioVolume(userId, 0)
Example
// Mute all remote users
await trtc.muteRemoteAudio('*', true);
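// Unmute a single remote user (the userId is illustrative)
await trtc.muteRemoteAudio('remoteUserId', false);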
Parameters:
Name Type Description
userId string required

Remote user ID, '*' represents all users.

mute boolean required

Whether to mute

Throws:

setRemoteAudioVolume(userId, volume)

Used to control the playback volume of remote audio.

  • Not supported by iOS Safari
Example
trtc.setRemoteAudioVolume('123', 90);
Parameters:
Name Type Description
userId string required

Remote user ID. '*' represents all remote users.

volume number required

Volume, ranging from 0 to 100. The default value is 100.
Since v5.1.3+, the volume can be set higher than 100.

(async) startPlugin(plugin, options) → {Promise.<void>}

Start a plugin.

pluginName | name | tutorial | param
'AudioMixer' | Audio Mixer Plugin | Music and Audio Effects | AudioMixerOptions
'AIDenoiser' | AI Denoiser Plugin | Implement AI noise reduction | AIDenoiserOptions
'VirtualBackground' | Virtual Background Plugin | Enable Virtual Background | VirtualBackgroundOptions
'Watermark' | Watermark Plugin | Enable Watermark Plugin | WatermarkOptions
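
The following is a minimal sketch of starting the AudioMixer plugin. The exact shape of AudioMixerOptions (an id and a url here) is an assumption; refer to the Music and Audio Effects tutorial linked above for the real fields.

Example
// Start background music through the AudioMixer plugin (option fields are assumptions)
await trtc.startPlugin('AudioMixer', {
  id: 'backgroundMusic',
  url: 'https://example.com/music.mp3'
});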
Parameters:
Name Type Description
plugin PluginName required
options AudioMixerOptions | AIDenoiserOptions | VirtualBackgroundOptions | WatermarkOptions required
Returns:
Type
Promise.<void>

(async) updatePlugin(plugin, options) → {Promise.<void>}

Update plugin

pluginName | name | tutorial | param
'AudioMixer' | Audio Mixer Plugin | Music and Audio Effects | UpdateAudioMixerOptions
'VirtualBackground' | Virtual Background Plugin | Enable Virtual Background | VirtualBackgroundOptions
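
The following is a minimal sketch of updating a running AudioMixer. The exact shape of UpdateAudioMixerOptions (the mixer id plus the fields to change) is an assumption; refer to the Music and Audio Effects tutorial for the real fields.

Example
// Update the mixer started earlier (option fields are assumptions)
await trtc.updatePlugin('AudioMixer', {
  id: 'backgroundMusic',
  loop: true
});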
Parameters:
Name Type Description
plugin PluginName required
options UpdateAudioMixerOptions | VirtualBackgroundOptions required
Returns:
Type
Promise.<void>

(async) stopPlugin(plugin, [options])

Stop plugin

pluginName | name | tutorial | param
'AudioMixer' | Audio Mixer Plugin | Music and Audio Effects | StopAudioMixerOptions
'AIDenoiser' | AI Denoiser Plugin | Implement AI noise reduction | (none)
'Watermark' | Watermark Plugin | Enable Watermark Plugin | (none)
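
The following is a minimal sketch of stopping plugins. The StopAudioMixerOptions shape (identifying the mixer by id) is an assumption; per the table above, the AIDenoiser plugin lists no stop options.

Example
// Stop the mixer started earlier (option fields are assumptions)
await trtc.stopPlugin('AudioMixer', { id: 'backgroundMusic' });
// Stop AI noise reduction
await trtc.stopPlugin('AIDenoiser');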
Parameters:
Name Type Description
plugin PluginName required
options StopAudioMixerOptions
Returns:
Type
Promise.<void>

enableAudioVolumeEvaluation([interval], [enableInBackground])

Enables or disables the volume callback.

  • After enabling this function, regardless of whether anyone in the room is speaking, the SDK will regularly emit the TRTC.on(TRTC.EVENT.AUDIO_VOLUME) event, which reports the volume of each user.
Example
trtc.on(TRTC.EVENT.AUDIO_VOLUME, event => {
   event.result.forEach(({ userId, volume }) => {
       const isMe = userId === ''; // When userId is an empty string, it represents the local microphone volume.
       if (isMe) {
           console.log(`my volume: ${volume}`);
       } else {
           console.log(`user: ${userId} volume: ${volume}`);
       }
   })
});
// Enable volume callback and trigger the event every 1000ms
trtc.enableAudioVolumeEvaluation(1000);
// To turn off the volume callback, pass in an interval value less than or equal to 0
trtc.enableAudioVolumeEvaluation(-1);
Parameters:
Name Type Default Description
interval number 2000

Used to set the time interval for triggering the volume callback event. The default is 2000(ms), and the minimum value is 100(ms). If set to less than or equal to 0, the volume callback will be turned off.

enableInBackground boolean false

For performance reasons, when the page switches to the background, the SDK will not throw volume callback events. If you need to receive volume callback events when the page is switched to the background, you can set this parameter to true.

on(eventName, handler, context)

Listen to TRTC events

For a detailed list of events, please refer to: TRTC.EVENT

Example
trtc.on(TRTC.EVENT.REMOTE_VIDEO_AVAILABLE, event => {
  // REMOTE_VIDEO_AVAILABLE event handler
});
Parameters:
Name Type Description
eventName string required

Event name

handler function required

Event callback function

context context required

Context

off(eventName, handler, context)

Remove event listener

Example
trtc.on(TRTC.EVENT.REMOTE_USER_ENTER, function peerJoinHandler(event) {
  // REMOTE_USER_ENTER event handler
  console.log('remote user enter');
  trtc.off(TRTC.EVENT.REMOTE_USER_ENTER, peerJoinHandler);
});
// Remove all event listeners
trtc.off('*');
Parameters:
Name Type Description
eventName string required

Event name. Passing in the wildcard '*' will remove all event listeners.

handler function required

Event callback function

context context required

Context

getAudioTrack([config]) → (nullable) {MediaStreamTrack}

Get audio track

Example
// Version before v5.4.3
trtc.getAudioTrack(); // Get local microphone audioTrack, captured by trtc.startLocalAudio()
trtc.getAudioTrack('remoteUserId'); // Get remote audioTrack
// Since v5.4.3+, you can get the local screen-sharing audioTrack by passing streamType = TRTC.TYPE.STREAM_TYPE_SUB
trtc.getAudioTrack({ streamType: TRTC.TYPE.STREAM_TYPE_SUB });
// Since v5.8.2+, you can get the processed audioTrack by passing processed = true
trtc.getAudioTrack({ processed: true });
Parameters:
Name Type Description
config Object | string

If not passed, get the local microphone audioTrack

Properties
Name Type Default Description
userId string

If not passed or passed an empty string, get the local audioTrack. Pass the userId of the remote user to get the remote user's audioTrack.

streamType STREAM_TYPE_MAIN | STREAM_TYPE_SUB

stream type:

  • TRTC.TYPE.STREAM_TYPE_MAIN: Main stream (user's microphone)(default)
  • TRTC.TYPE.STREAM_TYPE_SUB: Sub stream (user's screen sharing audio). Only works for local screen sharing audio because there is only one remote audioTrack, and there is no distinction between Main and Sub for remote audioTrack.
processed boolean false

Whether to get the processed audioTrack. The processed audioTrack is the audioTrack after the SDK processes the audio frame, such as ai-denose, gain, mix. The default value is false.

Returns:

Audio track

Type
MediaStreamTrack

getVideoTrack([config]) → {MediaStreamTrack|null}

Get video track

Example
// Get local camera videoTrack
const videoTrack = trtc.getVideoTrack();
// Get local screen sharing videoTrack
const screenVideoTrack = trtc.getVideoTrack({ streamType: TRTC.TYPE.STREAM_TYPE_SUB });
// Get remote user's main stream videoTrack
const remoteMainVideoTrack = trtc.getVideoTrack({ userId: 'test', streamType: TRTC.TYPE.STREAM_TYPE_MAIN });
// Get remote user's sub stream videoTrack
const remoteSubVideoTrack = trtc.getVideoTrack({ userId: 'test', streamType: TRTC.TYPE.STREAM_TYPE_SUB });
// Since v5.8.2+, you can get the processed videoTrack by passing processed = true
const processedVideoTrack = trtc.getVideoTrack({ processed: true });
Parameters:
Name Type Description
config object

If not passed, get the local camera videoTrack

Properties
Name Type Default Description
userId string

If not passed or passed an empty string, get the local videoTrack. Pass the userId of the remote user to get the remote user's videoTrack.

streamType STREAM_TYPE_MAIN | STREAM_TYPE_SUB

stream type:

  • TRTC.TYPE.STREAM_TYPE_MAIN: Main stream (user's camera) (default)
  • TRTC.TYPE.STREAM_TYPE_SUB: Sub stream (user's screen sharing)

processed boolean false

Whether to get the processed videoTrack. The processed videoTrack is the videoTrack after the SDK processes the video frame, such as visualbackground, mirror, watermark. The default value is false.

Returns:

Video track

Type
MediaStreamTrack | null

getVideoSnapshot([config])

Get video snapshot
Notice: the video must be playing before a snapshot can be obtained. If it is not playing, an empty string will be returned.

Since:
  • 5.4.0
Example
// get self main stream video frame
trtc.getVideoSnapshot()
// get self sub stream video frame
trtc.getVideoSnapshot({streamType:TRTC.TYPE.STREAM_TYPE_SUB})
// get remote user main stream video frame
trtc.getVideoSnapshot({userId: 'remote userId', streamType:TRTC.TYPE.STREAM_TYPE_MAIN})
Parameters:
Name Type Description
config.userId string

Remote user ID. Omit to take a snapshot of the local video.

config.streamType TRTC.TYPE.STREAM_TYPE_MAIN | TRTC.TYPE.STREAM_TYPE_SUB

sendSEIMessage(buffer, [options])

Send SEI Message

A video frame can carry a header block called SEI (Supplemental Enhancement Information). The principle of this interface is to use SEI to embed the custom data you want to send along with the video frame. SEI messages can travel with video frames all the way to the live CDN.

Applicable scenarios: synchronization of lyrics, live answering questions, etc.

When to call: call this after trtc.startLocalVideo succeeds, or after trtc.startScreenShare succeeds when the 'toSubStream' option is set to true.

Note:

  1. A maximum of 1 KB can be sent in a single call, with a maximum of 30 calls per second and a maximum of 8 KB per second.
  2. Supported browsers: Chrome 86+, Edge 86+, Opera 72+, Safari 15.4+, Firefox 117+. Safari and Firefox are supported since v5.8.0.
  3. Since SEI is sent along with video frames, there is a possibility that video frames may be lost, and therefore SEI may be lost as well. The number of times it can be sent can be increased within the frequency limit, and the business side needs to do message de-duplication on the receiving side.
  4. SEI cannot be sent without trtc.startLocalVideo (or trtc.startScreenShare with the 'toSubStream' option set to true); SEI cannot be received without startRemoteVideo.
  5. Only the H264 encoder supports sending SEI.
Since:
  • v5.3.0
See:
Example
// 1. enable SEI
const trtc = TRTC.create({
   enableSEI: true
})
// 2. send SEI
try {
 await trtc.enterRoom({
  userId: 'user_1',
  roomId: 12345,
})
 await trtc.startLocalVideo();
 const uint8Array = new Uint8Array([1, 2, 3]);
 trtc.sendSEIMessage(uint8Array.buffer);
} catch(error) {
 console.warn(error);
}
// 3. receive SEI
trtc.on(TRTC.EVENT.SEI_MESSAGE, event => {
 console.warn(`sei ${event.data} from ${event.userId}`);
})
Parameters:
Name Type Description
buffer ArrayBuffer required

SEI data to be sent

options Object
Properties
Name Type Default Description
seiPayloadType Number required

Set the SEI payload type. SDK uses the custom payloadType 243 by default, the business side can use this parameter to set the payloadType to the standard 5. When the business side uses the 5 payloadType, you need to follow the specification to make sure that the first 16 bytes of the buffer are the business side's customized uuid.

toSubStream Boolean false

Send SEI data to the sub stream. You need to call trtc.startScreenShare first. Since v5.7.0+.

sendCustomMessage(message)

Send Custom Message to all remote users in the room.

Note:

  1. Only TRTC.TYPE.ROLE_ANCHOR can call sendCustomMessage.
  2. You should call this API after trtc.enterRoom succeeds.
  3. Custom messages are sent in order and as reliably as possible, but messages may be lost on a very poor network. The receiver will also receive the messages in order.
Since:
  • v5.6.0
See:
Example
// send custom message
trtc.sendCustomMessage({
  cmdId: 1,
  data: new TextEncoder().encode('hello').buffer
});
// receive custom message
trtc.on(TRTC.EVENT.CUSTOM_MESSAGE, event => {
   // event.userId: remote userId.
   // event.cmdId: message cmdId.
   // event.seq: message sequence number.
   // event.data: custom message data, type is ArrayBuffer.
   console.log(`received custom msg from ${event.userId}, message: ${new TextDecoder().decode(event.data)}`)
})
Parameters:
Name Type Description
message object required
Properties
Name Type Description
cmdId number required

Message ID. Integer, range [1, 10]. You can set different cmdIds for different types of messages to reduce message transfer delay.

data ArrayBuffer required

message content.

  • Maximum 1KB(Byte) sent in a single call.
  • Maximum 30 calls per second
  • Maximum 8KB sent per second.

(static) setLogLevel([level], [enableUploadLog])

Set the log output level
It is recommended to use the DEBUG level during development and testing, which includes detailed log information. The default output level is INFO, which covers the logs of the SDK's main functionality.

Example
// Output logs at DEBUG level and above
TRTC.setLogLevel(1);
Parameters:
Name Type Default Description
level 0-5

Log output level 0: TRACE 1: DEBUG 2: INFO 3: WARN 4: ERROR 5: NONE

enableUploadLog boolean true

Whether to enable log upload, which is enabled by default. It is not recommended to turn it off, as doing so will make troubleshooting more difficult.

(static) isSupported() → {Promise.<object>}

Check if the TRTC Web SDK is supported by the current browser

Example
TRTC.isSupported().then((checkResult) => {
  if(!checkResult.result) {
     console.log('checkResult', checkResult.result, 'checkDetail', checkResult.detail);
     // The SDK is not supported by the current browser, guide the user to use the latest version of Chrome browser.
  }
});
Returns:

Promise returns the detection result

Property Type Description
checkResult.result boolean Detection result
checkResult.detail.isBrowserSupported boolean Whether the current browser is supported by the SDK
checkResult.detail.isWebRTCSupported boolean Whether the current browser supports WebRTC
checkResult.detail.isWebCodecsSupported boolean Whether the current browser supports WebCodecs
checkResult.detail.isMediaDevicesSupported boolean Whether the current browser supports obtaining media devices and media streams
checkResult.detail.isScreenShareSupported boolean Whether the current browser supports screen sharing
checkResult.detail.isSmallStreamSupported boolean Whether the current browser supports small streams
checkResult.detail.isH264EncodeSupported boolean Whether the current browser supports H264 encoding for uplink
checkResult.detail.isH264DecodeSupported boolean Whether the current browser supports H264 decoding for downlink
checkResult.detail.isVp8EncodeSupported boolean Whether the current browser supports VP8 encoding for uplink
checkResult.detail.isVp8DecodeSupported boolean Whether the current browser supports VP8 decoding for downlink
Type
Promise.<object>

(static) getCameraList([requestPermission]) → {Promise.<Array.<MediaDeviceInfo>>}

Returns the list of camera devices
Note

  • This interface cannot be used over the http protocol; please deploy your website over https. See Privacy and security.
  • You can call the browser's native getCapabilities interface to get the camera's capabilities, such as maximum resolution and frame rate, and on mobile devices to distinguish between front and rear cameras. This interface is supported on Chrome 67+, Edge 79+, Safari 17+, Opera 54+.
Example
const cameraList = await TRTC.getCameraList();
if (cameraList[0] && cameraList[0].getCapabilities) {
  const { width, height, frameRate, facingMode } = cameraList[0].getCapabilities();
  console.log(width.max, height.max, frameRate.max);
  if (facingMode) {
    if (facingMode[0] === 'user') {
      // front camera
    } else if (facingMode[0] === 'environment') {
      // rear camera
    }
  }
}
Parameters:
Name Type Default Description
requestPermission boolean true

Since v5.6.3. Whether to request permission to use the camera. If requestPermission is true, calling this method may temporarily open the camera to ensure that the camera list can be normally obtained, and the SDK will automatically stop the camera capture later.

Returns:

Promise returns an array of MediaDeviceInfo

Type
Promise.<Array.<MediaDeviceInfo>>

(static) getMicrophoneList([requestPermission]) → {Promise.<Array.<MediaDeviceInfo>>}

Returns the list of microphone devices
Note

  • This interface cannot be used over the http protocol; please deploy your website over https. See Privacy and security.
  • You can call the browser's native getCapabilities interface to get information about the microphone's capabilities, e.g. the maximum number of supported channels. This interface is supported on Chrome 67+, Edge 79+, Safari 17+, Opera 54+.
  • On Android there are usually multiple microphones, with a label list such as ['default', 'Speakerphone', 'Headset earpiece']. If you do not specify a microphone in trtc.startLocalAudio, the browser's default microphone may be the 'Headset earpiece', and the sound will come out of the earpiece. If you need playback through the speaker, specify the microphone labeled 'Speakerphone'.
Example
const microphoneList = await TRTC.getMicrophoneList();
if (microphoneList[0] && microphoneList[0].getCapabilities) {
  const { channelCount } = microphoneList[0].getCapabilities();
  console.log(channelCount.max);
}
Parameters:
Name Type Default Description
requestPermission boolean true

Since v5.6.3. Whether to request permission to use the microphone. If requestPermission is true, calling this method may temporarily open the microphone to ensure that the microphone list can be normally obtained, and the SDK will automatically stop the microphone capture later.

Returns:

Promise returns an array of MediaDeviceInfo

Type
Promise.<Array.<MediaDeviceInfo>>

(static) getSpeakerList([requestPermission]) → {Promise.<Array.<MediaDeviceInfo>>}

Returns the list of speaker devices. Only supported on desktop browsers; not supported on mobile browsers.
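
Example
// Enumerate audio output devices (desktop browsers only)
const speakerList = await TRTC.getSpeakerList();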

Parameters:
Name Type Default Description
requestPermission boolean true

Since v5.6.3. Whether to request permission to use the microphone. If requestPermission is true, calling this method may temporarily open the microphone to ensure that the microphone list can be normally obtained, and the SDK will automatically stop the microphone capture later.

Returns:

Promise returns an array of MediaDeviceInfo

Type
Promise.<Array.<MediaDeviceInfo>>

(async, static) setCurrentSpeaker(speakerId)

Set the current speaker for audio playback
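
Example
// Route audio playback to a chosen speaker; the deviceId comes from TRTC.getSpeakerList()
const speakerList = await TRTC.getSpeakerList();
if (speakerList[0]) {
  await TRTC.setCurrentSpeaker(speakerList[0].deviceId);
}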

Parameters:
Name Type Description
speakerId string required

Speaker ID