Constructor
new TRTCCloud()
Example
// Create TRTCCloud object
import TRTCCloud from 'trtc-electron-sdk';
const rtcCloud = new TRTCCloud();
// Get the SDK version number
const version = rtcCloud.getSDKVersion();
Methods
(static) getTRTCShareInstance() → {TRTCCloud}
Create the TRTCCloud main instance (singleton mode)
Returns:
- Type
- TRTCCloud
(static) destroyTRTCShareInstance()
Terminate the TRTCCloud main instance (singleton mode)
Note: All sub-instances created by the main instance will also be terminated.
createSubCloud() → {TRTCCloud}
Create a TRTCCloud sub-instance
Note: Only the main instance can create sub-instances. A sub-instance cannot create its own sub-instances.
Example
import TRTCCloud from 'trtc-electron-sdk';
const rtcCloud = TRTCCloud.getTRTCShareInstance();
rtcCloud.startLocalAudio(); // start microphone audio capture and publishing
const childRtcCloud = rtcCloud.createSubCloud();
childRtcCloud.startSystemAudioLoopback(); // start system audio capture and publishing
Returns:
- Type
- TRTCCloud
getConfigObject() → {TRTCConfig}
Get the TRTCConfig object
The TRTCConfig object can be used to enable debug mode.
Example
// Enable debug mode
import TRTCCloud from 'trtc-electron-sdk';
const rtcCloud = new TRTCCloud();
rtcCloud.getConfigObject().setDebugMode(true);
Returns:
- Type
- TRTCConfig
destroy()
Terminate the current TRTCCloud instance
enterRoom(params, scene)
Enter Room
After calling this API, you will receive the onEnterRoom(result) event notification:
- If room entry succeeded, the result parameter will be a positive number (result > 0), indicating the time in milliseconds (ms) between the function call and room entry.
- If room entry failed, the result parameter will be a negative number (result < 0), indicating the error code for the room entry failure.
Parameter "scene" can be one of the following value:
TRTCAppScene.TRTCAppSceneVideoCall
:
Video call scenario. Use cases: [one-to-one video call], [video conferencing with up to 300 participants], [online medical diagnosis], [small class], [video interview], etc. In this scenario, each room supports up to 300 concurrent online users, and up to 50 of them can speak simultaneously.TRTCAppScene.TRTCAppSceneAudioCall
:
Audio call scenario. Use cases: [one-to-one audio call], [audio conferencing with up to 300 participants], [audio chat], [online Werewolf], etc. In this scenario, each room supports up to 300 concurrent online users, and up to 50 of them can speak simultaneously.TRTCAppScene.TRTCAppSceneLIVE
:
Live streaming scenario. Use cases: [low-latency video live streaming], [interactive classroom for up to 100,000 participants], [live video competition], [video dating room], [remote training], [large-scale conferencing], etc. In this scenario, each room supports up to 100,000 concurrent online users, but you should specify the user roles: anchor (TRTCRoleAnchor
) or audience (TRTCRoleAudience
).TRTCAppScene.TRTCAppSceneVoiceChatRoom
:
Audio chat room scenario. Use cases: [Clubhouse], [online karaoke room], [music live room], [FM radio], etc. In this scenario, each room supports up to 100,000 concurrent online users, but you should specify the user roles: anchor (TRTCRoleAnchor
) or audience (TRTCRoleAudience
).
Notice:
- If scene is specified as TRTCAppScene.TRTCAppSceneLIVE or TRTCAppScene.TRTCAppSceneVoiceChatRoom, you must use the role field in TRTCParams to specify the role of the current user in the room.
- The same scene should be configured for all users in the same room.
- Please try to ensure that TRTCCloud.enterRoom and TRTCCloud.exitRoom are used in pairs; that is, make sure that "the previous room is exited before the next room is entered"; otherwise, many issues may occur.
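For reference, a minimal room-entry sketch; the sdkAppId, roomId, userId, and userSig values below are placeholders, and the TRTCParams field names are assumed from the TRTCParams definition:
import TRTCCloud, { TRTCParams, TRTCAppScene, TRTCRoleType } from 'trtc-electron-sdk';
const rtcCloud = TRTCCloud.getTRTCShareInstance();
rtcCloud.on('onEnterRoom', (result) => {
  if (result > 0) {
    console.log(`Room entry succeeded, took ${result} ms`);
  } else {
    console.error(`Room entry failed, error code: ${result}`);
  }
});
const params = new TRTCParams();
params.sdkAppId = 1400000000;   // placeholder SDKAppID
params.roomId = 8888;           // placeholder room number
params.userId = 'userA';        // placeholder user ID
params.userSig = 'xxxxxxxx';    // placeholder signature generated on your server
params.role = TRTCRoleType.TRTCRoleAnchor;
rtcCloud.enterRoom(params, TRTCAppScene.TRTCAppSceneLIVE);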
Parameters:
Name | Type | Description |
---|---|---|
params | TRTCParams | required. Room entry parameters. |
scene | TRTCAppScene | required. Application scenario. Currently supported scenes: VideoCall, Live, AudioCall, VoiceChatRoom. |
exitRoom()
Exit room
Calling this API will allow the user to leave the current audio or video room and release the camera, mic, speaker, and other device resources.
After resources are released, the SDK will emit the onExitRoom()
event to notify you.
If you need to call enterRoom
again or switch to the SDK of another provider, we recommend you wait until you receive the onExitRoom()
event, so as to avoid the problem of the camera or microphone being occupied.
switchRoom(params)
Switch room
This API is used to quickly switch a user from one room to another.
- If the user's role is "audience", calling this API is equivalent to exitRoom (current room) + enterRoom (new room).
- If the user's role is "anchor", the API will retain the current audio/video publishing status while switching the room; therefore, during the room switch, camera preview and sound capturing will not be interrupted.
This API is suitable for the online education scenario where a supervising teacher needs to switch quickly across multiple rooms. In this scenario, using switchRoom delivers better smoothness and requires less code than exitRoom + enterRoom. After calling this API, an onSwitchRoom(errCode, errMsg) event will be emitted.
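For reference, a hedged sketch of a quick room switch; the roomId and userSig values are placeholders, and the TRTCSwitchRoomParam field names are assumed from its definition:
import TRTCCloud, { TRTCSwitchRoomParam } from 'trtc-electron-sdk';
const rtcCloud = TRTCCloud.getTRTCShareInstance();
rtcCloud.on('onSwitchRoom', (errCode, errMsg) => {
  console.log(`switchRoom result: ${errCode} ${errMsg}`);
});
const switchParam = new TRTCSwitchRoomParam();
switchParam.roomId = 9999;          // placeholder: the room to switch to
switchParam.userSig = 'xxxxxxxx';   // placeholder: signature for the new room, if required
rtcCloud.switchRoom(switchParam);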
Parameters:
Name | Type | Description |
---|---|---|
params |
TRTCSwitchRoomParam |
required
Room parameter. For more information, please see |
switchRole(role)
Switch role
This API is used to switch the user role between "anchor" and "audience".
As video live rooms and audio chat rooms need to support an audience of up to 100,000 concurrent online users, the rule "only anchors can publish their audio/video streams" has been set. Therefore, when some users want to publish their streams (so that they can interact with anchors), they need to switch their role to "anchor" first.
You can use the role field in TRTCParams during room entry to specify the user role in advance, or use the switchRole API to switch roles after room entry.
Notice:
- This API is only applicable to two scenarios: live streaming (TRTCAppSceneLIVE) and audio chat room (TRTCAppSceneVoiceChatRoom).
- If the scene you specify in enterRoom is TRTCAppSceneVideoCall or TRTCAppSceneAudioCall, calling this API will have no effect.
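For reference, a sketch of an audience member temporarily becoming a speaker, assuming the TRTCRoleType and TRTCAudioQuality enums exported by the SDK:
import TRTCCloud, { TRTCRoleType, TRTCAudioQuality } from 'trtc-electron-sdk';
const rtcCloud = TRTCCloud.getTRTCShareInstance();
// Step up to speak
rtcCloud.switchRole(TRTCRoleType.TRTCRoleAnchor);
rtcCloud.startLocalAudio(TRTCAudioQuality.TRTCAudioQualityDefault);
// ... later, step back down to audience
rtcCloud.stopLocalAudio();
rtcCloud.switchRole(TRTCRoleType.TRTCRoleAudience);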
Parameters:
Name | Type | Description |
---|---|---|
role |
TRTCRoleType |
required
Role, which is "anchor" by default:
|
connectOtherRoom(params)
Request cross-room call
By default, only users in the same room can make audio/video calls with each other, and the audio/video streams in different rooms are isolated from each other.
However, you can publish the audio/video streams of an anchor in another room to the current room by calling this API. At the same time, this API will also publish the local audio/video streams to the target anchor's room.
In other words, you can use this API to share the audio/video streams of two anchors in two different rooms, so that the audience in each room can watch the streams of these two anchors. This feature can be used to implement anchor
competition. The result of requesting cross-room call will be returned through the onConnectOtherRoom()
event. For example, after anchor A in room "101" uses connectOtherRoom()
to successfully call anchor B
in room "102":
- All users in room "101" will receive the onRemoteUserEnterRoom(B) and onUserVideoAvailable(B,1) events of anchor B; that is, all users in room "101" can subscribe to the audio/video streams of anchor B.
- All users in room "102" will receive the onRemoteUserEnterRoom(A) and onUserVideoAvailable(A,1) events of anchor A; that is, all users in room "102" can subscribe to the audio/video streams of anchor A.
In short, with connectOtherRoom, two anchors in two different rooms can be connected, and all users in both rooms can see both anchors.
Before cross-room call:
                 Room 101                    Room 102
           ---------------------       ---------------------
           | Anchor: A         |       | Anchor: B         |
           | Users : U, V, W   |       | Users : X, Y, Z   |
           ---------------------       ---------------------

After cross-room call:
                 Room 101                    Room 102
           ---------------------       ---------------------
           | Anchors: A and B  |       | Anchors: B and A  |
           | Users : U, V, W   |       | Users : X, Y, Z   |
           ---------------------       ---------------------
For compatibility with subsequent extended fields for cross-room call, parameters in JSON format are used currently. If anchor A in room "101" wants to co-anchor with anchor B in room "102", then anchor A needs to pass in {"roomId": 102, "userId": "userB"} when calling this API.
After calling this API, the result will be returned through the onConnectOtherRoom
event notification.
Example
let json = JSON.stringify({roomId: 2, userId: "userB"});
rtcCloud.connectOtherRoom(json);
Parameters:
Name | Type | Description |
---|---|---|
params |
String |
required
You need to pass in a string parameter in JSON format: |
disconnectOtherRoom()
Exit cross-room call
The result will be returned through the onDisconnectOtherRoom
event notification.
setDefaultStreamRecvMode(autoRecvAudio, autoRecvVideo)
Set subscription mode
You can switch between the "automatic subscription" and "manual subscription" modes through this API:
- Automatic subscription: this is the default mode, where the user will immediately receive the audio/video streams in the room after room entry, so that the audio will be automatically played back, and the video will be automatically decoded (you still need to bind the rendering control through the startRemoteView API).
- Manual subscription: after room entry, the user needs to manually call the startRemoteView API to start subscribing to and decoding the video stream, and call the muteRemoteAudio(false) API to start playing back the audio stream.
In most scenarios, users will subscribe to the audio/video streams of all anchors in the room after room entry. Therefore, TRTC adopts the automatic subscription mode by default in order to achieve the best "instant streaming experience". In your application scenario, if there are many audio/video streams being published at the same time in each room, and each user only wants to subscribe to 1–2 of them, we recommend you use the "manual subscription" mode to reduce traffic costs.
Notice: This API must be called before room entry for it to take effect. Default value: true (automatic subscription).
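For reference, a manual-subscription sketch; params, view, and the user ID 'anchorA' are placeholders you must supply yourself:
import TRTCCloud, { TRTCAppScene, TRTCVideoStreamType } from 'trtc-electron-sdk';
const rtcCloud = TRTCCloud.getTRTCShareInstance();
rtcCloud.setDefaultStreamRecvMode(false, false); // must be called before enterRoom
rtcCloud.enterRoom(params, TRTCAppScene.TRTCAppSceneLIVE);
// Subscribe only to the anchor you care about:
rtcCloud.muteRemoteAudio('anchorA', false); // start receiving this user's audio
rtcCloud.startRemoteView('anchorA', view, TRTCVideoStreamType.TRTCVideoStreamTypeBig);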
Parameters:
Name | Type | Description |
---|---|---|
autoRecvAudio |
Boolean |
required
true: automatic subscription to audio; false:need manual subscription to audio by calling |
autoRecvVideo |
Boolean |
required
true: automatic subscription to video; false:need manual subscription to video by calling |
startPublishing(streamId, type)
Start publishing audio/video streams to Tencent Cloud CSS CDN
This API sends a command to the TRTC server, requesting it to relay the current user's audio/video streams to CSS CDN.
You can set the StreamId
of the live stream through the streamId
parameter, so as to specify the playback address of the user's audio/video streams on CSS CDN.
For example, if you specify the current user's live stream ID as user_stream_001
through this API, then the corresponding CDN playback address is:
"http://yourdomain/live/user_stream_001.flv", where yourdomain
is your playback domain name with an ICP filing.
You can configure your playback domain name in the CSS console. Tencent Cloud does not provide a default playback domain name.
You can also specify the streamId
when setting the TRTCParams
parameter of enterRoom
, which is the recommended approach.
Notice: You need to enable the "Enable Relayed Push" option on the "Function Configuration" page in the TRTC console in advance.
- If you select "Specified stream for relayed push", you can use this API to push the corresponding audio/video stream to Tencent Cloud CDN and specify the entered stream ID.
- If you select "Global auto-relayed push", you can use this API to adjust the default stream ID.
Example
const trtcCloud = TRTCCloud.getTRTCShareInstance();
trtcCloud.enterRoom(params, TRTCAppScene.TRTCAppSceneLIVE);
trtcCloud.startLocalPreview(view);
trtcCloud.startLocalAudio(TRTCAudioQuality.TRTCAudioQualityDefault);
trtcCloud.startPublishing("user_stream_001", TRTCVideoStreamType.TRTCVideoStreamTypeBig);
Parameters:
Name | Type | Description |
---|---|---|
streamId |
String |
required
Custom stream ID. |
type |
TRTCVideoStreamType |
required
Only |
stopPublishing()
Stop publishing audio/video streams to Tencent Cloud CSS CDN
startPublishCDNStream(param)
Start publishing audio/video streams to non-Tencent Cloud CDN
This API is similar to the startPublishing
API. The difference is that startPublishing
can only publish audio/video streams to Tencent Cloud CDN, while this API can relay streams to live streaming CDN services of other cloud providers.
Notice:
- Using the startPublishing API to publish audio/video streams to Tencent Cloud CSS CDN does not incur additional fees.
- Using the startPublishCDNStream API to publish audio/video streams to non-Tencent Cloud CDN incurs additional relaying bandwidth fees.
Parameters:
Name | Type | Description |
---|---|---|
param | TRTCPublishCDNParam | required. CDN relaying parameters. |
stopPublishCDNStream()
Stop publishing audio/video streams to non-Tencent Cloud CDN
setMixTranscodingConfig(config)
Set the layout and transcoding parameters of On-Cloud MixTranscoding
In a live room, there may be multiple anchors publishing their audio/video streams at the same time, but for audience on CSS CDN, they only need to watch one video stream in HTTP-FLV or HLS format.
When you call this API, the SDK will send a command to the TRTC mixtranscoding server to combine multiple audio/video streams in the room into one stream.
You can use the TRTCTranscodingConfig
parameter to set the layout of each channel of image. You can also set the encoding parameters of the mixed audio/video streams.
For more information, please see On-Cloud MixTranscoding.
**Image 1** => decoding ====> \
                               \
**Image 2** => decoding ====> image mixing => encoding => **mixed image**
                               /
**Image 3** => decoding ====> /

**Audio 1** => decoding ====> \
                               \
**Audio 2** => decoding ====> audio mixing => encoding => **mixed audio**
                               /
**Audio 3** => decoding ====> /
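For reference, a rough sketch of a two-anchor manual layout; the field names (mode, videoWidth, mixUsersArray, zOrder, etc.) are assumed from the TRTCTranscodingConfig and TRTCMixUser definitions and should be checked against your SDK version, and the user IDs are placeholders:
import TRTCCloud, {
  TRTCTranscodingConfig,
  TRTCTranscodingConfigMode,
  TRTCMixUser,
  Rect
} from 'trtc-electron-sdk';
const rtcCloud = TRTCCloud.getTRTCShareInstance();
const config = new TRTCTranscodingConfig();
config.mode = TRTCTranscodingConfigMode.TRTCTranscodingConfigMode_Manual;
config.videoWidth = 720;
config.videoHeight = 1280;
config.videoBitrate = 1500;
config.videoFramerate = 15;
config.audioSampleRate = 48000;
config.audioBitrate = 64;
config.audioChannels = 2;
const anchorA = new TRTCMixUser();
anchorA.userId = 'userA';                       // placeholder user ID
anchorA.rect = new Rect(0, 0, 720, 1280);       // full-screen layer
anchorA.zOrder = 1;
const anchorB = new TRTCMixUser();
anchorB.userId = 'userB';                       // placeholder user ID
anchorB.rect = new Rect(480, 960, 720, 1280);   // small picture-in-picture layer
anchorB.zOrder = 2;
config.mixUsersArray = [anchorA, anchorB];
rtcCloud.setMixTranscodingConfig(config);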
Parameters:
Name | Type | Description | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
config |
TRTCTranscodingConfig |
required
If Properties
|
startLocalPreview(view)
Enable the preview image of local camera
If this API is called before enterRoom
, the SDK will only enable the camera and wait until enterRoom
is called before starting push.
If it is called after enterRoom
, the SDK will enable the camera and automatically start pushing the video stream.
When the first camera video frame starts to be rendered, you will receive the onFirstVideoFrame
event.
This API will start the system default camera. If you want to use a different camera, you can call setCurrentCameraDevice() to choose another camera device.
Parameters:
Name | Type | Description |
---|---|---|
view |
HTMLElement |
required
HTML Element that will carry the video image |
stopLocalPreview()
Stop camera preview
updateLocalView(view)
Update the HTML element used to hold local camera video
This interface can be used in cases where you want to change the place where you preview your local camera on the HTML page.
Notice: You should call this interface after calling startLocalPreview()
, or else it will not work.
- If you have called startLocalPreview(view) with a valid HTML element, you can call this interface to change the HTML element holding the captured video from your camera.
- If you have called startLocalPreview(null) with null, you can call this interface to start previewing the captured video from your camera.
Parameters:
Name | Type | Description |
---|---|---|
view |
HTMLElement | null |
required
An HTML element to hold the captured video from your camera.
|
setCameraCaptureParams(params)
Set camera acquisition parameter
Notice: Only Windows
operating system supported.
Parameters:
Name | Type | Description |
---|---|---|
params |
TRTCCameraCaptureParams |
required
Camera acquisition parameter |
muteLocalVideo(mute, streamType)
Pause/Resume publishing local video stream
This API can pause (or resume) publishing the local video image. After the pause, other users in the same room will not be able to see the local image.
This API is equivalent to the two APIs of startLocalPreview/stopLocalPreview
when TRTCVideoStreamTypeBig is specified, but has higher performance and response speed.
The startLocalPreview/stopLocalPreview
APIs need to enable/disable the camera, which are hardware device-related operations, so they are very time-consuming.
In contrast, muteLocalVideo
only needs to pause or allow the data stream at the software level, so it is more efficient and more suitable for scenarios where frequent enabling/disabling are needed.
After local video publishing is paused, other members in the same room will receive the onUserVideoAvailable(userId, 0)
event notification.
After local video publishing is resumed, other members in the same room will receive the onUserVideoAvailable(userId, 1)
event notification.
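For reference, a brief sketch, assuming rtcCloud is a TRTCCloud instance that is already previewing and publishing the camera:
import { TRTCVideoStreamType } from 'trtc-electron-sdk';
// Pause publishing the camera image; remote users receive onUserVideoAvailable(userId, 0)
rtcCloud.muteLocalVideo(true, TRTCVideoStreamType.TRTCVideoStreamTypeBig);
// ... later, resume publishing; remote users receive onUserVideoAvailable(userId, 1)
rtcCloud.muteLocalVideo(false, TRTCVideoStreamType.TRTCVideoStreamTypeBig);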
Parameters:
Name | Type | Description |
---|---|---|
mute |
Boolean |
required
true: pause; false: resume, default value: false |
streamType |
TRTCVideoStreamType |
required
Video stream type to be paused or resumed |
setVideoMuteImage(imageBuffer, fps)
Set placeholder image during local video pause
When you call muteLocalVideo(true) to pause the local video image, you can set a placeholder image by calling this API. Then, other users in the room will see this image instead of a black screen.
Parameters:
Name | Type | Description |
---|---|---|
imageBuffer |
TRTCImageBuffer |
required
Placeholder image. A null value means that no more video stream data will be sent after muteLocalVideo . The default value is null. |
fps |
Number |
required
Frame rate of the placeholder image. Minimum value: 5. Maximum value: 10. Default value: 5 |
startRemoteView(userId, view, streamType)
Subscribe to a remote user's video stream and bind it to a video rendering control
If you call this API, the SDK will pull the video stream of the specified userId
and render it to the rendering control specified by the view
parameter. You can set the display mode of the video image using setRemoteRenderParams
.
- If you already know the userId of the user who is publishing video in the room, you can call startRemoteView to subscribe to the user's video.
- If you don't know who is publishing video in the room, you can wait for the onUserVideoAvailable event after a remote user calls enterRoom.
If you receive the onUserVideoAvailable(userId, 1)
event from the SDK, it indicates that the remote user has enabled video.
After receiving this event, call startRemoteView(userId)
to load the user’s video. You can use a loading animation to improve user experience during the waiting period.
When the first video frame of this user is displayed, you will receive the onFirstVideoFrame(userId)
event.
Notes:
- The SDK supports playing a user's big/small image and substream image at the same time, but does not support playing the big and small images at the same time.
- The small image of a user can be played only if the user has called enableSmallVideoStream to enable the dual-stream mode.
- If the small image of the specified user does not exist, the SDK will switch to the big image.
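For reference, a sketch that subscribes on demand, assuming the EventEmitter-style subscription used elsewhere in this document; the element ID is a placeholder:
import TRTCCloud, { TRTCVideoStreamType } from 'trtc-electron-sdk';
const rtcCloud = TRTCCloud.getTRTCShareInstance();
rtcCloud.on('onUserVideoAvailable', (userId, available) => {
  const view = document.getElementById('remote-video-container'); // placeholder element ID
  if (available === 1) {
    rtcCloud.startRemoteView(userId, view, TRTCVideoStreamType.TRTCVideoStreamTypeBig);
  } else {
    rtcCloud.stopRemoteView(userId, TRTCVideoStreamType.TRTCVideoStreamTypeBig);
  }
});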
Parameters:
Name | Type | Description |
---|---|---|
userId |
String |
required
The ID of the remote user whose video is to be played. |
view |
HTMLElement |
required
The HTML element that will carry the video image. |
streamType |
TRTCVideoStreamType |
required
Which stream of the user to play:
|
stopRemoteView(userId, streamType)
Stop subscribing to remote user's video stream and release rendering control
Calling this API will cause the SDK to stop receiving the user's video stream and release the decoding and rendering resources for the stream.
Parameters:
Name | Type | Description |
---|---|---|
userId |
String |
required
ID of the specified remote user |
streamType |
TRTCVideoStreamType |
required
Video stream type of the specified remote user |
updateRemoteView(userId, view, streamType)
Update the HTML element used to hold remote user video
This interface can be used in cases where you want to change the place where you preview remote user camera or screen-sharing video on the HTML page.
Notice: This interface should be called after calling startRemoteView()
, or else it will not work.
Parameters:
Name | Type | Description |
---|---|---|
userId |
String |
required
Remote user ID |
view |
HTMLElement | null |
required
An HTML element to hold the video from remote user.
|
streamType |
TRTCVideoStreamType |
required
Video stream type |
stopAllRemoteView()
Stop subscribing to all remote users' video streams and release all rendering resources
Calling this API will cause the SDK to stop receiving all remote video streams and release all decoding and rendering resources. Notice: If a substream image (screen sharing) is being displayed, it will also be stopped.
muteRemoteVideoStream(userId, mute, streamType)
Pause/Resume subscribing to remote user's video stream
This API only pauses/resumes receiving the specified user's video stream but does not release displaying resources; therefore, the video image will freeze at the last frame before it is called. Notice: This API can be called before room entry (enterRoom), and the pause status will be reset after room exit (exitRoom).
Parameters:
Name | Type | Description |
---|---|---|
userId |
String |
required
ID of the specified remote user |
mute |
Boolean |
required
Whether to pause receiving |
streamType |
TRTCVideoStreamType |
required
Video stream type |
muteAllRemoteVideoStreams(mute)
Pause/Resume subscribing to all remote users' video streams
This API only pauses/resumes receiving all users' video streams but does not release displaying resources; therefore, the video image will freeze at the last frame before it is called. Notice: This API can be called before room entry (enterRoom), and the pause status will be reset after room exit (exitRoom).
Parameters:
Name | Type | Description |
---|---|---|
mute |
Boolean |
required
Whether to pause receiving |
setVideoEncoderParam(params)
Set the encoding parameters of video encoder
This setting can determine the quality of image viewed by remote users, which is also the image quality of on-cloud recording files.
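For reference, a sketch that sets 960x540 at 15 fps; the TRTCVideoEncParam constructor argument order follows the screen-sharing example later in this document:
import TRTCCloud, {
  TRTCVideoEncParam,
  TRTCVideoResolution,
  TRTCVideoResolutionMode
} from 'trtc-electron-sdk';
const rtcCloud = TRTCCloud.getTRTCShareInstance();
const encParam = new TRTCVideoEncParam(
  TRTCVideoResolution.TRTCVideoResolution_960_540,
  TRTCVideoResolutionMode.TRTCVideoResolutionModeLandscape,
  15,    // video fps
  850,   // video bitrate (kbps)
  0,     // minimum video bitrate
  true   // last flag as in the screen-sharing example (resolution adjustment switch)
);
rtcCloud.setVideoEncoderParam(encParam);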
Parameters:
Name | Type | Description |
---|---|---|
params | TRTCVideoEncParam | required. Video encoding parameters. |
setNetworkQosParam(params)
Set network quality control parameters
This setting determines the quality control policy in a poor network environment, such as "image quality preferred" or "smoothness preferred".
Parameters:
Name | Type | Description |
---|---|---|
params | TRTCNetworkQosParam | required. Parameters for network quality control. |
setLocalRenderParams(params)
Set the rendering parameters of local video image
The parameters that can be set include video image rotation angle, fill mode, and mirror mode.
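For reference, a hedged sketch; the TRTCRenderParams field names and enum values are assumed from this section and the related type definitions, so verify them against your SDK version:
import TRTCCloud, {
  TRTCRenderParams,
  TRTCVideoRotation,
  TRTCVideoFillMode,
  TRTCVideoMirrorType
} from 'trtc-electron-sdk';
const rtcCloud = TRTCCloud.getTRTCShareInstance();
const renderParams = new TRTCRenderParams();
renderParams.rotation = TRTCVideoRotation.TRTCVideoRotation0;
renderParams.fillMode = TRTCVideoFillMode.TRTCVideoFillMode_Fit;
renderParams.mirrorType = TRTCVideoMirrorType.TRTCVideoMirrorType_Auto;
rtcCloud.setLocalRenderParams(renderParams);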
Parameters:
Name | Type | Description |
---|---|---|
params | TRTCRenderParams | required. Video image rendering parameters. |
setLocalViewFillMode(mode)
Set the rendering mode of the local image (deprecated)
- Deprecated:
-
- This API has been deprecated since TRTC SDK 8.0. Please use `setLocalRenderParams` instead.
Parameters:
Name | Type | Description |
---|---|---|
mode |
TRTCVideoFillMode |
required
Fill (the image may be stretched or cropped) or fit (there may be black bars). Default value: TRTCVideoFillMode_Fit.
|
setRemoteRenderParams(userId, streamType, params)
Set the rendering mode of remote video image
Parameters:
Name | Type | Description |
---|---|---|
userId | String | required. ID of the specified remote user. |
streamType | TRTCVideoStreamType | required. It can be set to the primary stream image (TRTCVideoStreamTypeBig) or substream image (TRTCVideoStreamTypeSub). |
params | TRTCRenderParams | required. Video image rendering parameters. |
setRemoteViewFillMode(userID, mode)
Set the rendering mode of a remote image (deprecated)
- Deprecated:
-
- This API has been deprecated since TRTC SDK 8.0. Please use `setRemoteRenderParams` instead.
Parameters:
Name | Type | Description |
---|---|---|
userID |
String |
required
The ID of the remote user. |
mode |
TRTCVideoFillMode |
required
Fill (the image may be stretched or cropped) or fit (there may be black bars). Default value: TRTCVideoFillMode_Fit.
|
setLocalViewRotation(rotation)
Set the rotation of the local image (deprecated)
- Deprecated:
-
- This API has been deprecated since TRTC SDK 8.0. Please use `setLocalRenderParams` instead.
Parameters:
Name | Type | Description |
---|---|---|
rotation |
TRTCVideoRotation |
required
Valid values: TRTCVideoRotation90, TRTCVideoRotation180, TRTCVideoRotation270, TRTCVideoRotation0 (default).
|
setRemoteViewRotation(userID, rotation)
Set the rotation of a remote image (deprecated)
- Deprecated:
-
- This API has been deprecated since TRTC SDK 8.0. Please use `setRemoteRenderParams` instead.
Parameters:
Name | Type | Description |
---|---|---|
userID |
String |
required
The ID of the remote user. |
rotation |
TRTCVideoRotation |
required
Valid values: TRTCVideoRotation90, TRTCVideoRotation180, TRTCVideoRotation270, TRTCVideoRotation0 (default).
|
setVideoEncoderRotation(rotation)
Set the direction of image output by video encoder
This setting does not affect the preview direction of the local video image, but affects the direction of the image viewed by other users in the room (and on-cloud recording files).
When a phone or tablet is rotated upside down, as the capturing direction of the camera does not change, the video image viewed by other users in the room will become upside-down.
In this case, you can call this API to rotate the image encoded by the SDK 180 degrees, so that other users in the room can view the image in the normal direction.
If you want to achieve the aforementioned user-friendly interactive experience, we recommend you directly call setGSensorMode
to implement smarter direction adaptation, with no need to call this API manually.
Parameters:
Name | Type | Description |
---|---|---|
rotation |
TRTCVideoRotation |
required
Currently, rotation angles of 0 and 180 degrees are supported. Default value: TRTCVideoRotation0 (no rotation)
|
setLocalViewMirror(mirror)
Turn on/off the mirror mode for the local camera preview (deprecated)
- Deprecated:
-
- This API has been deprecated since TRTC SDK 8.0. Please use `setLocalRenderParams` instead.
Parameters:
Name | Type | Description |
---|---|---|
mirror |
Boolean |
required
Whether to turn on the mirror mode. Default value for Windows: false (off); default value for macOS: true (on). |
setVideoEncoderMirror(mirror)
Set the mirror mode of image output by encoder
This setting does not affect the mirror mode of the local video image, but affects the mirror mode of the image viewed by other users in the room (and on-cloud recording files).
Parameters:
Name | Type | Description |
---|---|---|
mirror |
Boolean |
required
Whether to enable remote mirror mode. true: yes; false: no. Default value: false |
enableSmallVideoStream(enable, params)
Enable dual-channel encoding mode with big and small images
In this mode, the current user's encoder will output two channels of video streams, i.e., HD big image and Smooth small image, at the same time (only one channel of audio stream will be output though). In this way, other users in the room can choose to subscribe to the HD big image or Smooth small image according to their own network conditions or screen size.
Notice: Dual-channel encoding will consume more CPU resources and network bandwidth; therefore, this feature can be enabled on macOS, Windows, or high-spec tablets, but is not recommended for phones.
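For reference, a sketch of the publishing side enabling a small stream and a receiving side requesting it; the user ID 'userA' is a placeholder:
import TRTCCloud, {
  TRTCVideoEncParam,
  TRTCVideoResolution,
  TRTCVideoResolutionMode,
  TRTCVideoStreamType
} from 'trtc-electron-sdk';
const rtcCloud = TRTCCloud.getTRTCShareInstance();
// Publishing side: add a 320x180 small stream next to the HD big stream
const smallParam = new TRTCVideoEncParam(
  TRTCVideoResolution.TRTCVideoResolution_320_180,
  TRTCVideoResolutionMode.TRTCVideoResolutionModeLandscape,
  15,   // video fps
  100,  // video bitrate (kbps)
  0,
  true
);
rtcCloud.enableSmallVideoStream(true, smallParam);
// Receiving side (another client): switch to the small image of that user
rtcCloud.setRemoteVideoStreamType('userA', TRTCVideoStreamType.TRTCVideoStreamTypeSmall);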
Parameters:
Name | Type | Description |
---|---|---|
enable | Boolean | required. Whether to enable small image encoding. Default value: false. |
params | TRTCVideoEncParam | required. Video encoding parameters of the small image stream. |
setRemoteVideoStreamType(userId, type)
Switch the big/small image of specified remote user
After an anchor in a room enables dual-channel encoding, the video image that other users in the room subscribe to through startRemoteView
will be HD big image by default.
You can use this API to select whether the image subscribed to is the big image or small image. The API can take effect before or after startRemoteView
is called.
Notice: To implement this feature, the target user must have enabled the dual-channel encoding mode through enableSmallVideoStream
; otherwise, this API will not work.
Parameters:
Name | Type | Description |
---|---|---|
userId |
String |
required
ID of the specified remote user |
type |
TRTCVideoStreamType |
required
Video stream type, i.e., big image or small image. Default value: HD big image
|
snapshotVideo(userId, streamType)
Video snapshot
You can use this API to take snapshot of the local video image or the primary stream image and substream (screen sharing) image of a remote user.
After calling this API, an onSnapshotComplete
event will be emitted.
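For reference, a brief sketch; the remote user ID is a placeholder, and the onSnapshotComplete callback arguments are not spelled out here, so check the event reference:
import TRTCCloud, { TRTCVideoStreamType } from 'trtc-electron-sdk';
const rtcCloud = TRTCCloud.getTRTCShareInstance();
rtcCloud.on('onSnapshotComplete', (...args) => {
  console.log('onSnapshotComplete', args);
});
rtcCloud.snapshotVideo(null, TRTCVideoStreamType.TRTCVideoStreamTypeBig);    // snapshot of the local video
rtcCloud.snapshotVideo('userA', TRTCVideoStreamType.TRTCVideoStreamTypeSub); // snapshot of a remote user's screen sharing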
Parameters:
Name | Type | Description |
---|---|---|
userId |
String |
required
User ID. A null value indicates to take snapshot of the local video. |
streamType |
TRTCVideoStreamType |
required
Video stream type, which can be the primary stream image ( |
setPriorRemoteVideoStreamType(type)
Set video references for playback (deprecated)
For low-end devices, we recommend you set the parameter to TRTCVideoStreamTypeSmall
(small image).
This API will not work if a remote user hasn’t enabled the dual-stream mode.
- Deprecated:
-
- This API has been deprecated since TRTC SDK 8.0. Please use `startRemoteView` instead.
Parameters:
Name | Type | Description |
---|---|---|
type |
TRTCVideoStreamType |
required
Whether to play the big or small image by default. Default value: TRTCVideoStreamTypeBig.
|
startLocalRecording(options)
Start local media recording
This API records the audio/video content during live streaming into a local file.
Parameters:
Name | Type | Description |
---|---|---|
options | Object | required. Recording parameters. |
stopLocalRecording()
Stop local media recording
If a recording task has not been stopped through this API before room exit, it will be automatically stopped after room exit.
startLocalAudio(quality)
Enable local audio capturing and publishing
The SDK does not enable the microphone by default. When a user wants to publish the local audio, the user needs to call this API to enable microphone capturing and encode and publish the audio to the current room.
After local audio capturing and publishing is enabled, other users in the room will receive the onUserAudioAvailable
(userId, 1) notification.
Parameters:
Name | Type | Description |
---|---|---|
quality |
TRTCAudioQuality |
required
Sound quality
|
stopLocalAudio()
Stop local audio capturing and publishing
After local audio capturing and publishing is stopped, other users in the room will receive the onUserAudioAvailable
(userId, false) event notification.
muteLocalAudio(mute)
Pause/Resume publishing local audio stream
After local audio publishing is paused, other users in the room will receive the onUserAudioAvailable
(userId, false) notification.
After local audio publishing is resumed, other users in the room will receive the onUserAudioAvailable
(userId, true) notification.
Different from stopLocalAudio
, muteLocalAudio(true)
does not release the mic permission; instead, it continues to send mute packets with extremely low bitrate.
This is very suitable for scenarios that require on-cloud recording, as video file formats such as MP4 have a high requirement for audio continuity, while an MP4 recording file cannot be played back smoothly if stopLocalAudio
is used.
Therefore, muteLocalAudio
instead of stopLocalAudio
is recommended in scenarios where the requirement for recording file quality is high.
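For reference, a brief sketch, assuming rtcCloud is a TRTCCloud instance that has already called startLocalAudio:
rtcCloud.muteLocalAudio(true);  // keep sending low-bitrate mute packets instead of stopping capture
// ... later
rtcCloud.muteLocalAudio(false); // resume publishing the microphone audio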
Parameters:
Name | Type | Description |
---|---|---|
mute |
Boolean |
required
true: mute; false: unmute, default value: false |
muteRemoteAudio(userId, mute)
Pause/Resume playing back remote audio stream
When you mute the remote audio of a specified user, the SDK will stop playing back the user's audio and pulling the user's audio data.
Notice: This API works when called either before or after room entry (enterRoom), and the mute status will be reset to false
after room exit (exitRoom).
Parameters:
Name | Type | Description |
---|---|---|
userId |
String |
required
ID of the specified remote user |
mute |
Boolean |
required
true: mute; false: unmute |
muteAllRemoteAudio(mute)
Pause/Resume playing back all remote users' audio streams
When you mute the audio of all remote users, the SDK will stop playing back all their audio streams and pulling all their audio data.
Notice: This API works when called either before or after room entry (enterRoom), and the mute status will be reset to false
after room exit (exitRoom).
Parameters:
Name | Type | Description |
---|---|---|
mute |
Boolean |
required
true: mute; false: unmute |
setRemoteAudioVolume(userId, volume)
Set the audio playback volume of remote user
You can mute the audio of a remote user through setRemoteAudioVolume(userId, 0)
.
Notice: If 100 is still not loud enough for you, you can set the volume to up to 150, but there may be side effects.
Parameters:
Name | Type | Description |
---|---|---|
userId |
String |
required
ID of the specified remote user |
volume |
Number |
required
Volume. 100 is the original volume. Value range: [0,150]. Default value: 100 |
setAudioCaptureVolume(volume)
Set the capturing volume of local audio
Notice: If 100 is still not loud enough for you, you can set the volume to up to 150, but there may be side effects.
Parameters:
Name | Type | Description |
---|---|---|
volume |
Number |
required
Volume. 100 is the original volume. Value range: [0,150]. Default value: 100 |
getAudioCaptureVolume() → {Number}
Get the capturing volume of local audio
Returns:
- Capture volume
- Type
- Number
setAudioPlayoutVolume(volume)
Set the playback volume of remote audio
This API controls the volume of the sound ultimately delivered by the SDK to the system for playback. It affects the volume of the recorded local audio file but not the volume of in-ear monitoring.
Notice: If 100 is still not loud enough for you, you can set the volume to up to 150, but there may be side effects.
Parameters:
Name | Type | Description |
---|---|---|
volume |
Number |
required
Volume. 100 is the original volume. Value range: [0,150]. Default value: 100 |
getAudioPlayoutVolume() → {Number}
Get the playback volume of remote audio
Returns:
- Playback volume
- Type
- Number
enableAudioVolumeEvaluation(interval)
Enable volume reminder
After this feature is enabled, the SDK will return the remote audio volume in the onUserVoiceVolume
event notification.
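For reference, a sketch that prints per-user volumes roughly every 300 ms; the callback argument names are assumptions, so check the onUserVoiceVolume event reference:
import TRTCCloud from 'trtc-electron-sdk';
const rtcCloud = TRTCCloud.getTRTCShareInstance();
rtcCloud.enableAudioVolumeEvaluation(300);
rtcCloud.on('onUserVoiceVolume', (userVolumes, userVolumesCount, totalVolume) => {
  userVolumes.forEach((item) => console.log(item.userId, item.volume));
});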
Parameters:
Name | Type | Description |
---|---|---|
interval |
Number |
required
Set the interval in ms for emitting the |
startAudioRecording(params) → {Number}
Start audio recording
After this API is called, the SDK will record all audios of a call, including the local audio, remote audios, background music, and audio effects, into a file.
This API works both before and after room entry. Recording will stop automatically after room exit, even if stopAudioRecording
is not called.
Notes:
- The path must contain the filename and extension. The extension determines the format of the recording file. Supported formats include PCM, WAV, and AAC. For example, if you set the path to mypath/record/audio.aac, the SDK will generate an audio recording file in AAC format. Please specify a valid path with read/write permissions; otherwise, the system will fail to generate audio recording files.
- In versions earlier than 9.3, params (required), which indicates the path to save recording files, must be a string. In v9.3 and later versions, the parameter can be a string or TRTCAudioRecordingParams.
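For reference, a brief sketch using the string form of the parameter and the example path from the notes above:
import TRTCCloud from 'trtc-electron-sdk';
const rtcCloud = TRTCCloud.getTRTCShareInstance();
const result = rtcCloud.startAudioRecording('mypath/record/audio.aac');
if (result !== 0) {
  console.error(`startAudioRecording failed, code: ${result}`);
}
// ... later
rtcCloud.stopAudioRecording();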
Parameters:
Name | Type | Description |
---|---|---|
params | TRTCAudioRecordingParams or String | required. Audio recording parameters. |
Returns:
0: successful; -1: Audio recording has started; -2: Failed to create the file or directory; -3: The audio format specified is not supported.
- Type
- Number
stopAudioRecording()
Stop audio recording
If a recording task has not been stopped through this API before room exit, it will be automatically stopped after room exit.
setRemoteAudioParallelParams(param)
Set up an intelligent concurrent playback strategy for remote audio streams
Intended for rooms with many speakers.
Parameters:
Name | Type | Description |
---|---|---|
param |
TRTCAudioParallelParams |
required
Audio parallel parameter. |
setAudioQuality(quality)
Set audio quality
Notice:
- Higher sound quality brings a better listening experience but requires more bandwidth, and is more likely to stutter in weak network conditions.
Parameters:
Name | Type | Description |
---|---|---|
quality |
TRTCAudioQuality |
required
Sound quality
|
setMicVolumeOnMixing(volume)
Set microphone volume (deprecated)
- Deprecated:
-
- This API has been deprecated since TRTC SDK 6.9. Please use setAudioCaptureVolume instead.
Parameters:
Name | Type | Description |
---|---|---|
volume |
Number |
required
Volume value. Value range: 0 - 200; default: 100 |
getCameraDevicesList() → {Array.<TRTCDeviceInfo>}
Get the list of cameras
Example
const cameraList = rtcCloud.getCameraDevicesList();
for (let i = 0; i < cameraList.length; i++) {
  const camera = cameraList[i];
  console.info("camera deviceName: " + camera.deviceName + " deviceId: " + camera.deviceId);
}
Returns:
- Camera List
- Type
- Array.<TRTCDeviceInfo>
setCurrentCameraDevice(deviceId)
Set the camera to be used
Parameters:
Name | Type | Description |
---|---|---|
deviceId |
String |
required
|
getCurrentCameraDevice() → {TRTCDeviceInfo}
Get the camera currently in use
Returns:
Camera device information
- Type
- TRTCDeviceInfo
getMicDevicesList() → {Array.<TRTCDeviceInfo>}
Get the list of microphones
Example
const micList = rtcCloud.getMicDevicesList();
for (let i = 0; i < micList.length; i++) {
  const mic = micList[i];
  console.info("mic deviceName: " + mic.deviceName + " deviceId: " + mic.deviceId);
}
Returns:
microphone device list
- Type
- Array.<TRTCDeviceInfo>
getCurrentMicDevice() → {TRTCDeviceInfo}
Get the microphone currently in use
Returns:
device information with device ID and name
- Type
- TRTCDeviceInfo
setCurrentMicDevice(micId)
Set the mic to use
This API is used to set the mic to use. If you do not call this API, the first mic (index 0) in the list returned by getMicDevicesList will be used.
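For reference, a brief sketch that picks the first microphone from the device list:
import TRTCCloud from 'trtc-electron-sdk';
const rtcCloud = TRTCCloud.getTRTCShareInstance();
const micList = rtcCloud.getMicDevicesList();
if (micList.length > 0) {
  rtcCloud.setCurrentMicDevice(micList[0].deviceId);
}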
Parameters:
Name | Type | Description |
---|---|---|
micId |
String |
required
The ID of the mic to use. You can call |
getCurrentMicDeviceVolume() → {Number}
Get the current mic volume
This API is used to get the capturing volume of the mic. Note: This API returns the audio volume of the hardware.
Returns:
The mic volume. Value range: 0-100.
- Type
- Number
setCurrentMicDeviceVolume(volume)
Set the current mic volume
This API is used to set the capturing volume of the mic. Note: This API sets the system capturing volume. If the user adjusts the system capturing volume manually, the volume set by the API will be overwritten.
Parameters:
Name | Type | Description |
---|---|---|
volume |
Number |
required
The volume to use. Value range: 0-100. |
setCurrentMicDeviceMute(mute)
Set the mute status of the microphone currently in use
Parameters:
Name | Type | Description |
---|---|---|
mute |
Boolean |
required
true: mute, false: unmute |
getCurrentMicDeviceMute() → {Boolean}
Get the mute status of the microphone currently in use
Returns:
Mute state
- Type
- Boolean
getSpeakerDevicesList() → {Array.<TRTCDeviceInfo>}
Get the list of speakers
Example
const speakerList = rtcCloud.getSpeakerDevicesList();
for (let i = 0; i < speakerList.length; i++) {
  const speaker = speakerList[i];
  console.info("speaker deviceName: " + speaker.deviceName + " deviceId: " + speaker.deviceId);
}
Returns:
Speaker device list
- Type
- Array.<TRTCDeviceInfo>
getCurrentSpeakerDevice() → {TRTCDeviceInfo}
Get the speaker currently in use
Returns:
device information with device ID and name
- Type
- TRTCDeviceInfo
setCurrentSpeakerDevice(speakerId)
Set the speaker currently in use
Parameters:
Name | Type | Description |
---|---|---|
speakerId |
String |
required
|
getCurrentSpeakerVolume() → {Number}
Get the current speaker volume
This API is used to get the playback volume of the speaker.
Returns:
The speaker volume. Value range: 0-100.
- Type
- Number
setCurrentSpeakerVolume(volume)
Set the current speaker volume
This API is used to set the playback volume of the speaker.
Note: This API sets the system playback volume. If the user adjusts the system playback volume manually, the volume set by the API will be overwritten.
Parameters:
Name | Type | Description |
---|---|---|
volume |
Number |
required
The volume to use. Value range: 0-100. |
setCurrentSpeakerDeviceMute(mute)
Set the mute status of the speaker currently in use
Parameters:
Name | Type | Description |
---|---|---|
mute |
Boolean |
required
true: mute, false: unmute |
getCurrentSpeakerDeviceMute() → {Boolean}
Get the mute status of the speaker currently in use
Returns:
true: muted, false: unmuted
- Type
- Boolean
enableFollowingDefaultAudioDevice(deviceType, enable)
Set the audio device used by SDK to follow the system default device
Microphones and speakers are supported; cameras are not supported.
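For reference, a brief sketch; the TRTCDeviceType enum value names are assumptions, so verify them in your SDK version:
import TRTCCloud, { TRTCDeviceType } from 'trtc-electron-sdk';
const rtcCloud = TRTCCloud.getTRTCShareInstance();
rtcCloud.enableFollowingDefaultAudioDevice(TRTCDeviceType.TRTCDeviceTypeMic, true);
rtcCloud.enableFollowingDefaultAudioDevice(TRTCDeviceType.TRTCDeviceTypeSpeaker, true);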
Parameters:
Name | Type | Description |
---|---|---|
deviceType |
TRTCDeviceType |
required
Device type |
enable |
Boolean |
required
Whether to follow the system default audio device
|
setBeautyStyle(style, beauty, white, ruddiness)
Set special effects such as beauty, brightening, and rosy skin filters
The SDK is integrated with two skin smoothing algorithms of different styles:
- "Smooth" style, which uses a more radical algorithm for more obvious effect and is suitable for show live streaming.
- "Natural" style, which retains more facial details for more natural effect and is suitable for most live streaming use cases.
Note: The computer must be equipped with a graphics card, otherwise the function will not take effect.
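For reference, a brief sketch applying medium-strength natural smoothing; the TRTCBeautyStyle enum value name is an assumption, so verify it in your SDK version:
import TRTCCloud, { TRTCBeautyStyle } from 'trtc-electron-sdk';
const rtcCloud = TRTCCloud.getTRTCShareInstance();
rtcCloud.setBeautyStyle(TRTCBeautyStyle.TRTCBeautyStyleNature, 5, 5, 5);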
Parameters:
Name | Type | Description |
---|---|---|
style |
TRTCBeautyStyle |
required
Skin smoothing algorithm ("smooth" or "natural")
|
beauty |
Number |
required
Strength of the beauty filter. Value range: 0–9; 0 indicates that the filter is disabled, and the greater the value, the more obvious the effect. |
white |
Number |
required
Strength of the brightening filter. Value range: 0–9; 0 indicates that the filter is disabled, and the greater the value, the more obvious the effect. |
ruddiness |
Number |
required
Strength of the rosy skin filter. Value range: 0–9; 0 indicates that the filter is disabled, and the greater the value, the more obvious the effect. |
setWaterMark(streamType, srcData, srcType, nWidth, nHeight, xOffset, yOffset, fWidthRatio)
Add watermark
The watermark position is determined by the xOffset
, yOffset
, and fWidthRatio
parameters.
xOffset
: X coordinate of watermark, which is a floating-point number between 0 and 1.yOffset
: Y coordinate of watermark, which is a floating-point number between 0 and 1.fWidthRatio
: watermark dimensions ratio, which is a floating-point number between 0 and 1.
Parameters:
Name | Type | Description |
---|---|---|
streamType |
TRTCVideoStreamType |
required
Stream type of the watermark to be set ( |
srcData |
ArrayBuffer | String |
required
Source data of watermark image (if |
srcType |
TRTCWaterMarkSrcType |
required
Source data type of watermark image
|
nWidth |
Number |
required
Pixel width of watermark image (this parameter will be ignored if the source data is a file path) |
nHeight |
Number |
required
Pixel height of watermark image (this parameter will be ignored if the source data is a file path) |
xOffset |
Number |
required
Top-left offset on the X axis of watermark |
yOffset |
Number |
required
Top-left offset on the Y axis of watermark |
fWidthRatio |
Number |
required
Ratio of watermark width to image width (the watermark will be scaled according to this parameter) |
startRemoteSubStreamView(userId, view)
Start displaying the substream image of remote user (deprecated)
- startRemoteView is used to display the big image (TRTCVideoStreamTypeBig, commonly used for the camera).
- This API is used to display the substream image (TRTCVideoStreamTypeSub, commonly used for screen sharing).
- Deprecated:
-
- This API has been deprecated since TRTC SDK 8.0. Please use
startRemoteView
instead. Notice: This API should be called afteronUserSubStreamAvailable
event notification.
- This API has been deprecated since TRTC SDK 8.0. Please use
Parameters:
Name | Type | Description |
---|---|---|
userId |
String |
required
remote user ID |
view |
HTMLElement |
required
HTML Element where to display substream image |
stopRemoteSubStreamView(userId)
Stop displaying the substream image of remote user (deprecated)
Substream (TRTCVideoStreamTypeSub, commonly used for screen sharing).
- Deprecated:
-
- This API has been deprecated since TRTC SDK 8.0. Please use
stopRemoteView
instead.
- This API has been deprecated since TRTC SDK 8.0. Please use
Parameters:
Name | Type | Description |
---|---|---|
userId |
String |
required
remote user ID |
setRemoteSubStreamViewFillMode(userId, mode)
Set the fill mode of substream image (deprecated)
- setRemoteViewFillMode is used to set the fill mode of the big image (TRTCVideoStreamTypeBig, commonly used for the camera).
- This API is used to set the fill mode of the substream image (TRTCVideoStreamTypeSub, commonly used for screen sharing).
- Deprecated:
-
- This API has been deprecated since TRTC SDK 8.0. Please use
setRemoteRenderParams
instead.
- This API has been deprecated since TRTC SDK 8.0. Please use
Parameters:
Name | Type | Description |
---|---|---|
userId |
String |
required
remote user ID |
mode |
TRTCVideoFillMode |
required
Video image fill mode. Default value: TRTCVideoFillMode_Fit
|
setRemoteSubStreamViewRotation(userId, rotation)
Set the clockwise rotation angle of substream image (deprecated)
- TRTCCloud#setRemoteViewRotation is used to set the clockwise rotation angle of the big image (TRTCVideoStreamTypeBig, commonly used for the camera).
- This API is used to set the clockwise rotation angle of the substream image (TRTCVideoStreamTypeSub, commonly used for screen sharing).
- Deprecated:
-
- This API has been deprecated since TRTC SDK 8.0. Please use
setRemoteRenderParams
instead.
- This API has been deprecated since TRTC SDK 8.0. Please use
Parameters:
Name | Type | Description |
---|---|---|
userId |
String |
required
remote user ID |
rotation |
TRTCVideoRotation |
required
supported angle: 90, 180, 270 |
getScreenCaptureSources(thumbWidth, thumbHeight, iconWidth, iconHeight) → {Array.<TRTCScreenCaptureSourceInfo>}
Enumerate shareable screens and windows
When you integrate the screen sharing feature of a desktop system, you generally need to display a UI for selecting the sharing target, so that users can use the UI to choose whether to share the entire screen or a certain window. Through this API, you can query the IDs, names, and thumbnails of sharable windows on the current system. We provide a default UI implementation in the demo for your reference.
Notice: The returned list contains the screen and the application windows. The screen is the first element in the list. If the user has multiple displays, then each display is a sharing target.
Parameters:
Name | Type | Description |
---|---|---|
thumbWidth |
Number |
required
Specify the thumbnail width of the window to be obtained. The thumbnail can be drawn on the window selection UI. |
thumbHeight |
Number |
required
Specify the thumbnail height of the window to be obtained. The thumbnail can be drawn on the window selection UI. |
iconWidth |
Number |
required
Specify the icon width of the window to be obtained. |
iconHeight |
Number |
required
Specify the icon height of the window to be obtained. |
Returns:
List of windows (including the screen)
- Type
- Array.<TRTCScreenCaptureSourceInfo>
selectScreenCaptureTarget(source, captureRect, property)
Select the screen or window to share
After you get the sharable screens and windows through getScreenCaptureSources
, you can call this API to select the target screen or window you want to share.
During the screen sharing process, you can also call this API at any time to switch the sharing target.
The following four sharing modes are supported:
- Sharing the entire screen: for a source whose type is Screen in sourceInfoList, set captureRect to { 0, 0, 0, 0 }.
- Sharing a specified area: for a source whose type is Screen in sourceInfoList, set captureRect to a non-empty rectangle, e.g., { 100, 100, 300, 300 }.
- Sharing an entire window: for a source whose type is Window in sourceInfoList, set captureRect to { 0, 0, 0, 0 }.
- Sharing a specified window area: for a source whose type is Window in sourceInfoList, set captureRect to a non-empty rectangle, e.g., { 100, 100, 300, 300 }.
Notice: Due to operating system API limitations, when you switch between sharing windows created by the same application more than once, the selected window may not be brought to the front of all other windows; you need to bring it to the front manually.
Examples
// Example 1: Select the screen or window to share
import TRTCCloud, {
Rect,
TRTCScreenCaptureProperty
} from 'trtc-electron-sdk';
const rtcCloud = new TRTCCloud();
const screenAndWindows = rtcCloud.getScreenCaptureSources(320, 180, 32, 32);
const selectedScreenOrWindow = screenAndWindows[0];
const selectRect = new Rect(0, 0, 0, 0);
const captureProperty = new TRTCScreenCaptureProperty(
true, // enable capture mouse
true, // enable highlight
true, // enable high performance
0xFF66FF, // highlight color
8, // highlight width
false // disable capture child window
);
rtcCloud.selectScreenCaptureTarget(
selectedScreenOrWindow,
selectRect,
captureProperty
);
// Example 2: Select the screen or window to share, deprecated invocation way
import TRTCCloud, { Rect } from 'trtc-electron-sdk';
const rtcCloud = new TRTCCloud();
const screenAndWindows = rtcCloud.getScreenCaptureSources(320, 180, 32, 32);
const selectedScreenOrWindow = screenAndWindows[0];
const selectRect = new Rect(0, 0, 0, 0);
rtcCloud.selectScreenCaptureTarget(
selectedScreenOrWindow.type,
selectedScreenOrWindow.sourceId,
selectedScreenOrWindow.sourceName,
selectRect,
true, // enable capture mouse
true // enable highlight
);
Parameters:
Name | Type | Description |
---|---|---|
source | TRTCScreenCaptureSourceInfo | required. Specifies the sharing source. For more information, please see the definition of TRTCScreenCaptureSourceInfo. |
captureRect | Rect | required. Specifies the area to be captured. |
property | TRTCScreenCaptureProperty | required. Specifies the attributes of the screen sharing target, such as capturing the cursor and highlighting the captured window. For more information, please see the definition of TRTCScreenCaptureProperty. |
startScreenCapture(view, type, params)
Start desktop screen sharing
This API can capture the screen content of the entire macOS system or a specified application and share it with other users in the same room.
Notice:
- A user can publish at most one primary stream (TRTCVideoStreamTypeBig) and one substream (TRTCVideoStreamTypeSub) at the same time.
- By default, screen sharing uses the substream image. If you want to use the primary stream for screen sharing, you need to stop camera capturing (through stopLocalPreview) in advance to avoid conflicts.
- Only one user can use the substream for screen sharing in the same room at any time; that is, only one user is allowed to enable the substream in the same room at any time.
- When there is already a user in the room using the substream for screen sharing, calling this API will trigger the onError(ERR_SERVER_CENTER_ANOTHER_USER_PUSH_SUB_VIDEO) event notification.
Example
// Share selected screen or window
import TRTCCloud, {
  TRTCVideoStreamType,
  TRTCVideoEncParam,
  TRTCVideoResolution,
  TRTCVideoResolutionMode,
  Rect,
  TRTCScreenCaptureProperty
} from 'trtc-electron-sdk';
const rtcCloud = new TRTCCloud();
const screenAndWindows = rtcCloud.getScreenCaptureSources(320, 180, 32, 32);
const selectedScreenOrWindow = screenAndWindows[0];
const selectRect = new Rect(0, 0, 0, 0);
const captureProperty = new TRTCScreenCaptureProperty(
true, // enable capture mouse
true, // enable highlight
true, // enable high performance
0, // default highlight color
0, // default highlight width
false // disable capture child window
);
rtcCloud.selectScreenCaptureTarget(
selectedScreenOrWindow,
selectRect,
captureProperty
);
const screenShareEncParam = new TRTCVideoEncParam(
TRTCVideoResolution.TRTCVideoResolution_1280_720,
TRTCVideoResolutionMode.TRTCVideoResolutionModeLandscape,
15,
1600,
0,
true,
);
rtcCloud.startScreenCapture(
view, // HTML Element
TRTCVideoStreamType.TRTCVideoStreamTypeSub,
screenShareEncParam,
);
Parameters:
Name | Type | Default | Description |
---|---|---|---|
view |
HTMLElement |
null
|
required
HTML Element where to preview the screen sharing effect |
type |
TRTCVideoStreamType |
required
Channel used for screen sharing, which can be the primary stream ( |
|
params |
TRTCVideoEncParam |
null
|
required
Image encoding parameters used for screen sharing, which can be set to |
pauseScreenCapture()
Pause screen sharing
resumeScreenCapture()
Resume screen sharing
stopScreenCapture()
Stop screen sharing
setSubStreamEncoderParam(params)
Set the video encoding parameters of screen sharing (i.e., substream)
This API can set the image quality of screen sharing (i.e., the substream) viewed by remote users, which is also the image quality of screen sharing in on-cloud recording files. Please note the differences between the following two APIs:
setVideoEncoderParam
is used to set the video encoding parameters of the primary stream image (TRTCVideoStreamTypeBig
, generally for camera).setSubStreamEncoderParam
is used to set the video encoding parameters of the substream image (TRTCVideoStreamTypeSub
, generally for screen sharing).
Notice:
Even if you use the primary stream to transfer screen sharing data (set type=TRTCVideoStreamTypeBig
when calling startScreenCapture
), you still need to call the setSubStreamEncoderParam
API instead of the setVideoEncoderParam
API to set the screen sharing encoding parameters.
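For reference, a sketch tuned for mostly static screen content; the TRTCVideoEncParam constructor argument order follows the screen-sharing example above:
import TRTCCloud, {
  TRTCVideoEncParam,
  TRTCVideoResolution,
  TRTCVideoResolutionMode
} from 'trtc-electron-sdk';
const rtcCloud = TRTCCloud.getTRTCShareInstance();
const subStreamParam = new TRTCVideoEncParam(
  TRTCVideoResolution.TRTCVideoResolution_1920_1080,
  TRTCVideoResolutionMode.TRTCVideoResolutionModeLandscape,
  10,    // video fps
  2000,  // video bitrate (kbps)
  0,
  true
);
rtcCloud.setSubStreamEncoderParam(subStreamParam);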
Parameters:
Name | Type | Description |
---|---|---|
params | TRTCVideoEncParam | required. Substream encoding parameters. |
setSubStreamMixVolume(volume)
Set the audio mixing volume of screen sharing
The greater the value, the larger the ratio of the screen sharing volume to the mic volume. We recommend you not set a high value for this parameter as a high volume will cover the mic sound.
Parameters:
Name | Type | Description |
---|---|---|
volume |
Number |
required
Set audio mixing volume. Value range: 0–100 |
addExcludedShareWindow(win)
Add specified windows to the exclusion list of screen sharing
The excluded windows will not be shared. This feature is generally used to add a certain application's window to the exclusion list to avoid privacy issues. You can set the filtered windows before starting screen sharing or dynamically add the filtered windows during screen sharing.
Notice:
- This API takes effect only if the type in TRTCScreenCaptureSourceInfo is specified as TRTCScreenCaptureSourceTypeScreen; that is, the feature of excluding specified windows works only when the entire screen is shared.
- The windows added to the exclusion list through this API will be automatically cleared by the SDK after room exit.
- On macOS, please pass in the window ID (CGWindowID), which can be obtained through the sourceId member in TRTCScreenCaptureSourceInfo (see the sketch below).
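The sketch below illustrates one way to exclude a specific window while sharing the entire screen; the window-matching logic and the window title are illustrative assumptions.
Example
import TRTCCloud, { TRTCScreenCaptureSourceType } from 'trtc-electron-sdk';
const rtcCloud = TRTCCloud.getTRTCShareInstance();
const sources = rtcCloud.getScreenCaptureSources(320, 180, 32, 32);
// Pick the window you want to hide, e.g. by matching its name (hypothetical title)
const privateWindow = sources.find(
  (s) => s.type === TRTCScreenCaptureSourceType.TRTCScreenCaptureSourceTypeWindow
    && s.sourceName.includes('Password Manager')
);
if (privateWindow) {
  // sourceId is the window ID described in TRTCScreenCaptureSourceInfo
  rtcCloud.addExcludedShareWindow(privateWindow.sourceId);
}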
Parameters:
Name | Type | Description |
---|---|---|
win |
String |
required
Window not to be shared |
removeExcludedShareWindow(win)
Remove specified windows from the exclusion list of screen sharing
Parameters:
Name | Type | Description |
---|---|---|
win |
String |
required
Window to be removed from exclusion list |
removeAllExcludedShareWindow()
Remove all windows from the exclusion list of screen sharing
addIncludedShareWindow(win)
Add specified windows to the inclusion list of screen sharing
This API takes effect only if the type in TRTCScreenCaptureSourceInfo is specified as TRTCScreenCaptureSourceTypeWindow; that is, the feature of additionally including specified windows works only when a window is shared.
You can call it before or after startScreenCapture.
Notice: The windows added to the inclusion list by this method will be automatically cleared by the SDK after room exit.
Parameters:
Name | Type | Description |
---|---|---|
win |
String |
required
Window ID to be shared |
removeIncludedShareWindow(win)
Remove specified windows from the inclusion list of screen sharing
This API takes effect only if the type in TRTCScreenCaptureSourceInfo is specified as TRTCScreenCaptureSourceTypeWindow.
That is, the feature of additionally including specified windows works only when a window is shared.
Parameters:
Name | Type | Description |
---|---|---|
win |
String |
required
Window ID to be removed from the inclusion list |
removeAllIncludedShareWindow()
Remove all windows from the inclusion list of screen sharing
This API takes effect only if the type in TRTCScreenCaptureSourceInfo is specified as TRTCScreenCaptureSourceTypeWindow.
That is, the feature of additionally including specified windows works only when a window is shared.
enableCustomAudioCapture(enable)
Enable custom audio capturing mode
After this mode is enabled, the SDK will not run the original audio capturing process (i.e., stopping mic data capturing) and will retain only the audio encoding and sending capabilities.
You need to use sendCustomAudioData
to continuously insert the captured audio data into the SDK.
Notice:
- Custom audio capture and the SDK's default microphone capture are mutually exclusive. Before calling enableCustomAudioCapture(true) to enable custom audio capture, call stopLocalAudio to turn off the default microphone capture and publishing; otherwise, custom capture will not take effect. After calling enableCustomAudioCapture(false) to turn off custom capture, call startLocalAudio to re-enable the SDK's default microphone capture (see the sketch below).
- As acoustic echo cancellation (AEC) requires strict control over the audio capturing and playback time, AEC may fail after custom audio capturing is enabled.
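A minimal sketch of switching from the default microphone capture to custom capture, assuming the default capture was previously started with startLocalAudio:
Example
import TRTCCloud from 'trtc-electron-sdk';
const rtcCloud = TRTCCloud.getTRTCShareInstance();
// Turn off the SDK's default microphone capture first
rtcCloud.stopLocalAudio();
// Then enable custom audio capturing; from now on, feed PCM data via sendCustomAudioData
rtcCloud.enableCustomAudioCapture(true);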
Parameters:
Name | Type | Description |
---|---|---|
enable |
Boolean |
required
Whether to enable custom audio capturing. Default value: false |
sendCustomAudioData(frame)
Deliver captured audio data to SDK
We recommend you enter the following information for the TRTCAudioFrame parameter (other fields can be left empty):
- audioFormat: audio data format, which can only be TRTCAudioFrameFormatPCM.
- data: audio frame buffer. Audio frame data must be in PCM format, and it supports a frame length of 5–100 ms (20 ms is recommended). Length calculation method: for example, if the sample rate is 48000, the frame length for a mono channel will be 48000 × 0.02s × 1 × 16 bit = 15360 bit = 1920 bytes.
- sampleRate: sample rate. Valid values: 16000, 24000, 32000, 44100, 48000.
- channel: number of channels (if stereo is used, data is interleaved). Valid values: 1: mono channel; 2: dual channel.
- timestamp (ms): set it to the timestamp when the audio frame is captured, which you can obtain by calling generateCustomPTS after getting an audio frame.
Notice: Please call this API accurately at intervals of the frame length; otherwise, sound lag may occur due to uneven data delivery intervals. A delivery sketch follows.
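A minimal sketch of delivering one 20 ms mono PCM frame at 48000 Hz. Field-by-field construction of TRTCAudioFrame and the TRTCAudioFrameFormat enum name are assumptions here, and the PCM source is a hypothetical helper from your own capture pipeline.
Example
import TRTCCloud, { TRTCAudioFrame, TRTCAudioFrameFormat } from 'trtc-electron-sdk';
const rtcCloud = TRTCCloud.getTRTCShareInstance();
const frame = new TRTCAudioFrame();
frame.audioFormat = TRTCAudioFrameFormat.TRTCAudioFrameFormatPCM;
frame.sampleRate = 48000;
frame.channel = 1;
frame.timestamp = rtcCloud.generateCustomPTS(); // timestamp taken at capture time
frame.data = getPcmChunkFromYourCapturer();     // hypothetical helper returning 1920 bytes of PCM
rtcCloud.sendCustomAudioData(frame);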
Parameters:
Name | Type | Description |
---|---|---|
frame |
TRTCAudioFrame |
required
Audio data |
enableMixExternalAudioFrame(enablePublish, enablePlayout)
Enable/Disable custom audio track
After this feature is enabled, you can mix a custom audio track into the SDK through mixExternalAudioFrame. The two boolean parameters control whether the mixed track is published to remote users and whether it is played back locally.
Notice: If you specify both enablePublish and enablePlayout as false, the custom audio track will be completely closed.
Parameters:
Name | Type | Description |
---|---|---|
enablePublish |
Boolean |
required
Whether the mixed audio track should be published to remote users. Default value: false |
enablePlayout |
Boolean |
required
Whether the mixed audio track should be played back locally. Default value: false |
mixExternalAudioFrame(frame) → {Number}
Mix custom audio track into SDK
Before you use this API to mix custom PCM audio into the SDK, you need to first enable custom audio tracks through enableMixExternalAudioFrame
.
You are expected to feed audio data into the SDK at an even pace, but we understand that it can be challenging to call an API at absolutely regular intervals.
Given this, we have provided a buffer pool in the SDK, which can cache the audio data you pass in to reduce the fluctuations in intervals between API calls.
The value returned by this API indicates the size (ms) of the buffer pool. For example, if 50 is returned, it indicates that the buffer pool has 50 ms of audio data. As long as you call this API again within 50 ms, the SDK can make sure that continuous audio data is mixed.
If the value returned is 100 or greater, you can wait after an audio frame is played to call the API again. If the value returned is smaller than 100 , then there isn’t enough data in the buffer pool, and you should feed more audio data into the SDK until the data in the buffer pool is above the safety level.
Fill the fields in TRTCAudioFrame as follows (other fields are not required):
- data : audio frame buffer. Audio frames must be in PCM format. Each frame can be 5-100 ms (20 ms is recommended) in duration. Assume that the sample rate is 48000 and the audio is mono-channel. Then the frame size would be 48000 x 0.02s x 1 x 16 bit = 15360 bit = 1920 bytes.
- sampleRate : sample rate. Valid values: 16000, 24000, 32000, 44100, 48000
- channel : number of sound channels (if dual-channel is used, data is interleaved). Valid values: 1 (mono-channel); 2 (dual channel)
- timestamp : timestamp (ms). Set it to the timestamp when audio frames are captured, which you can obtain by calling generateCustomPTS after getting an audio frame.
Notice:
- To mix in custom audio tracks, there needs to be an upstream audio stream as the driver. Supported upstream audio streams include the default microphone audio stream captured by the SDK, which can be enabled by calling the startLocalAudio interface.
- Please call this interface accurately at the interval of each frame duration; uneven data delivery intervals can easily cause sound stuttering. A pacing sketch follows this list.
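The sketch below shows one way to pace calls using the buffer size returned by this API. The PCM source is a hypothetical helper, and the fixed 20 ms timer is an illustrative assumption; field-by-field construction of TRTCAudioFrame is also assumed.
Example
import TRTCCloud, { TRTCAudioFrame } from 'trtc-electron-sdk';
const rtcCloud = TRTCCloud.getTRTCShareInstance();
rtcCloud.startLocalAudio();                       // upstream audio stream as the driver
rtcCloud.enableMixExternalAudioFrame(true, true); // publish remotely and play back locally
const timer = setInterval(() => {
  const frame = new TRTCAudioFrame();
  frame.sampleRate = 48000;
  frame.channel = 1;
  frame.timestamp = rtcCloud.generateCustomPTS();
  frame.data = getPcmChunkFromYourSource();       // hypothetical helper returning one 20 ms PCM frame
  const bufferedMs = rtcCloud.mixExternalAudioFrame(frame);
  if (bufferedMs < 100) {
    // The buffer pool is below the safety level; consider feeding more data sooner
  }
}, 20);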
Parameters:
Name | Type | Description |
---|---|---|
frame |
TRTCAudioFrame |
required
Audio data |
Returns:
- Audio buffer duration, unit: ms. A value less than 0 means an error (-1 means enableMixExternalAudioFrame has not been called).
- Type
- Number
setMixExternalAudioVolume(publishVolume, playoutVolume)
Set the publish volume and playback volume of mixed custom audio track
Parameters:
Name | Type | Description |
---|---|---|
publishVolume |
Number |
required
Set the publish volume (heard by remote users). Value range: 0–100; -1 means no change |
playoutVolume |
Number |
required
Set the local playback volume. Value range: 0–100; -1 means no change |
generateCustomPTS() → {Number}
Generate custom capturing timestamp
This API is only suitable for the custom capturing mode and is used to solve the problem of out-of-sync audio/video caused by the inconsistency between the capturing time and delivery time of audio/video frames.
When you call APIs such as sendCustomAudioData
for custom video or audio capturing, please use this API as instructed below:
- First, when a video or audio frame is captured, call this API to get the corresponding PTS timestamp.
- Then, send the video or audio frame to the preprocessing module you use (such as a third-party beauty filter or sound effect component).
- When you actually call sendCustomAudioData for delivery, assign the PTS timestamp recorded when the frame was captured to the timestamp field in TRTCVideoFrame or TRTCAudioFrame, as in the sketch below.
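A minimal sketch of the capture-then-deliver flow described above; the capture and preprocessing helpers are hypothetical placeholders, and field-by-field construction of TRTCAudioFrame is assumed.
Example
import TRTCCloud, { TRTCAudioFrame } from 'trtc-electron-sdk';
const rtcCloud = TRTCCloud.getTRTCShareInstance();
// 1. Capture a raw PCM frame from your own pipeline (hypothetical helper)
const rawPcm = captureOnePcmFrame();
// 2. Record the PTS at capture time
const pts = rtcCloud.generateCustomPTS();
// 3. Run your own preprocessing, e.g. a third-party sound effect component (hypothetical helper)
const processedPcm = applySoundEffect(rawPcm);
// 4. Deliver the frame with the PTS recorded at capture time
const frame = new TRTCAudioFrame();
frame.sampleRate = 48000;
frame.channel = 1;
frame.data = processedPcm;
frame.timestamp = pts;
rtcCloud.sendCustomAudioData(frame);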
Returns:
- Timestamp, unit: ms.
- Type
- Number
setAudioFrameCallback(callback)
Set custom audio data callback
After this callback is set, the SDK will internally call back the audio data (in PCM format), including:
- onCapturedAudioFrame: callback of the audio data captured by the local mic
- onLocalProcessedAudioFrame: callback of the audio data captured by the local mic and preprocessed by the audio module
- onPlayAudioFrame: audio data from each remote user before audio mixing
- onMixedPlayAudioFrame: callback of the audio data that will be played back by the system after audio streams are mixed
- onMixedAllAudioFrame: data mixed from all the captured and to-be-played audio in the SDK
Note: Setting the callback to null indicates to stop the custom audio callback, while setting it to a non-null value indicates to start the custom audio callback.
Example
import TRTCCloud, { TRTCAudioFrame } from 'trtc-electron-sdk';
const rtcCloud = TRTCCloud.getTRTCShareInstance();
function onCapturedAudioFrame(frame: TRTCAudioFrame) {}
function onLocalProcessedAudioFrame(frame: TRTCAudioFrame) {}
function onPlayAudioFrame(frame: TRTCAudioFrame, userId: string) {}
function onMixedPlayAudioFrame(frame: TRTCAudioFrame) {}
function onMixedAllAudioFrame(frame: TRTCAudioFrame) {}
// set custom audio data callback
rtcCloud.setAudioFrameCallback({
onCapturedAudioFrame: onCapturedAudioFrame,
onLocalProcessedAudioFrame: onLocalProcessedAudioFrame,
onPlayAudioFrame: onPlayAudioFrame,
onMixedPlayAudioFrame: onMixedPlayAudioFrame,
onMixedAllAudioFrame: onMixedAllAudioFrame,
});
// cancel custom audio data callback
rtcCloud.setAudioFrameCallback({
onCapturedAudioFrame: null,
onLocalProcessedAudioFrame: null,
onPlayAudioFrame: null,
onMixedPlayAudioFrame: null,
onMixedAllAudioFrame: null,
});
Parameters:
Name | Type | Description |
---|---|---|
callback |
TRTCAudioFrameCallback |
required
Callback of custom audio processing. |
sendCustomCmdMsg(cmdId, msg, reliable, ordered) → {Boolean}
Use UDP channel to send custom message to all users in room
This API allows you to use TRTC's UDP channel to broadcast custom data to other users in the current room for signaling transfer.
The UDP channel in TRTC was originally designed to transfer audio/video data. This API works by disguising the signaling data you want to send as audio/video data packets and sending them together with the audio/video data to be sent.
Other users in the room can receive the message through the onRecvCustomCmdMsg
event.
Notice:
- Up to 30 messages can be sent per second to all users in the room (this is not supported for web and mini program currently).
- A packet can contain up to 1 KB of data; if the threshold is exceeded, the packet is very likely to be discarded by the intermediate router or server.
- A client can send up to 8 KB of data in total per second.
- reliable and ordered must currently be set to the same value (true or false); they cannot be set to different values.
- We strongly recommend you set different cmdID values for messages of different types. This can reduce message delay when orderly sending is required. A usage sketch follows this list.
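A minimal sketch of broadcasting a small JSON payload over the UDP channel; the cmdId value and message shape are illustrative assumptions.
Example
import TRTCCloud from 'trtc-electron-sdk';
const rtcCloud = TRTCCloud.getTRTCShareInstance();
// In this sketch, cmdId 1 is reserved for "reaction" messages
const ok = rtcCloud.sendCustomCmdMsg(
  1,                                // cmdId, value range: 1–10
  JSON.stringify({ type: 'like' }), // keep the payload well under 1 KB
  true,                             // reliable
  true                              // ordered (must match reliable)
);
if (!ok) {
  console.warn('sendCustomCmdMsg failed');
}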
Parameters:
Name | Type | Description |
---|---|---|
cmdId |
Number |
required
Message ID. Value range: 1–10 |
msg |
String |
required
Message to be sent. The maximum length of one single message is 1 KB. |
reliable |
Boolean |
required
Whether reliable sending is enabled. Reliable sending can achieve a higher success rate but with a longer reception delay than unreliable sending. |
ordered |
Boolean |
required
Whether orderly sending is enabled, i.e., whether the data packets should be received in the same order in which they are sent; if so, a certain delay will be caused. |
Returns:
true: sent the message successfully; false: failed to send the message.
- Type
- Boolean
sendSEIMsg(msg, repeatCount) → {Boolean}
Use SEI channel to send custom message to all users in room
This API allows you to use TRTC's SEI channel to broadcast custom data to other users in the current room for signaling transfer.
The header of a video frame has a header data block called SEI. This API works by embedding the custom signaling data you want to send in the SEI block and sending it together with the video frame.
Therefore, the SEI channel has a better compatibility than sendCustomCmdMsg
as the signaling data can be transferred to the CSS CDN along with the video frame.
However, because the data block of the video frame header cannot be too large, we recommend you limit the size of the signaling data to only a few bytes when using this API.
The most common use is to embed the custom timestamp into video frames through this API so as to implement a perfect alignment between the message and video image (such as between the teaching material and video signal in the education
scenario). Other users in the room can receive the message through the onRecvSEIMsg
event.
Notice: This API has the following restrictions:
- The data will not be instantly sent after this API is called; instead, it will be inserted into the next video frame after the API call.
- Up to 30 messages can be sent per second to all users in the room (this limit is shared with sendCustomCmdMsg).
- Each packet can be up to 1 KB (this limit is shared with sendCustomCmdMsg). If a large amount of data is sent, the video bitrate will increase, which may reduce the video quality or even cause lagging.
- Each client can send up to 8 KB of data in total per second (this limit is shared with sendCustomCmdMsg).
- If multiple sends are required (i.e., repeatCount > 1), the data will be inserted into the subsequent repeatCount video frames in a row, which will increase the video bitrate.
- If repeatCount is greater than 1, the data will be sent multiple times, and the same message may be received multiple times in the onRecvSEIMsg event; therefore, deduplication is required. A usage sketch follows this list.
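A minimal sketch of embedding a small custom payload into the next video frame; using a wall-clock timestamp as the payload is an illustrative assumption.
Example
import TRTCCloud from 'trtc-electron-sdk';
const rtcCloud = TRTCCloud.getTRTCShareInstance();
// Keep the payload to a few bytes; here we embed a timestamp for message/image alignment
const ok = rtcCloud.sendSEIMsg(String(Date.now()), 1);
if (!ok) {
  console.warn('sendSEIMsg was rejected');
}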
Parameters:
Name | Type | Description |
---|---|---|
msg |
String |
required
Data to be sent, which can be up to 1 KB (1,000 bytes) |
repeatCount |
Number |
required
Data sending count |
Returns:
true: the message is allowed and will be sent with subsequent video frames; false: the message is not allowed to be sent
- Type
- Boolean
playBGM(path)
Play background music (deprecated)
- Deprecated:
  - This API has been deprecated since TRTC SDK 8.0. Please use startPlayMusic instead.
Parameters:
Name | Type | Description |
---|---|---|
path |
String |
required
Path of the music file |
stopBGM()
Stop background music (deprecated)
- Deprecated:
  - This API has been deprecated since TRTC SDK 8.0. Please use stopPlayMusic instead.
pauseBGM()
Pause background music (deprecated)
- Deprecated:
  - This API has been deprecated since TRTC SDK 8.0. Please use pausePlayMusic instead.
resumeBGM()
Resume background music (deprecated)
- Deprecated:
  - This API has been deprecated since TRTC SDK 8.0. Please use resumePlayMusic instead.
getBGMDuration(path) → {Number}
Get the total length of background music in ms (deprecated)
- Deprecated:
  - This API has been deprecated since TRTC SDK 8.0. Please use getMusicDurationInMS instead.
Parameters:
Name | Type | Description |
---|---|---|
path |
String |
required
Path of the music file |
Returns:
The length of the specified music file is returned. -1 indicates failure to get the length.
- Type
- Number
setBGMPosition(pos)
Set background music playback progress (deprecated)
- Deprecated:
  - This API has been deprecated since TRTC SDK 8.0. Please use seekMusicToPosInTime instead.
Parameters:
Name | Type | Description |
---|---|---|
pos |
Number |
required
Unit: ms |
setBGMVolume(volume)
Set background music volume (deprecated)
When background music is playing, this API can be used to set the background music volume for both local and remote users.
- Deprecated:
  - This API has been deprecated since TRTC SDK 8.0. Please use setAllMusicVolume instead.
Parameters:
Name | Type | Description |
---|---|---|
volume |
Number |
required
Volume value. Value range: 0 - 200; default: 100 |
setBGMPlayoutVolume(volume)
Set the local playback volume of background music (deprecated)
When background music is playing, this API can be used to set the local playback volume of the background music.
- Deprecated:
  - This API has been deprecated since TRTC SDK 8.0. Please use setMusicPlayoutVolume instead.
Parameters:
Name | Type | Description |
---|---|---|
volume |
Number |
required
Volume value. Value range: 0 - 100. Default value: 100 |
setBGMPublishVolume(volume)
Set the remote playback volume of background music (deprecated)
When background music is playing, this API can be used to set the playback volume of the background music heard by remote users.
- Deprecated:
  - This API has been deprecated since TRTC SDK 8.0. Please use setMusicPublishVolume instead.
Parameters:
Name | Type | Description |
---|---|---|
volume |
Number |
required
Volume value. Value range: 0 - 100. Default value: 100 |
startSystemAudioLoopback(path)
Enable system audio capturing
This API captures audio data from the sound card of the anchor’s computer and mixes it into the current audio stream of the SDK. This ensures that other users in the room hear the audio played back by the anchor’s computer. In online education scenarios, a teacher can use this API to have the SDK capture the audio of instructional videos and broadcast it to students in the room. In live music scenarios, an anchor can use this API to have the SDK capture the music played back by his or her player so as to add background music to the room.
Parameters:
Name | Type | Description |
---|---|---|
path |
String |
required
If this parameter is empty, the audio of the entire system is captured. If path is not empty, only the audio of the application specified by path is captured. |
stopSystemAudioLoopback()
Stop system audio capturing
setSystemAudioLoopbackVolume(volume)
Set the volume of system audio capturing
Parameters:
Name | Type | Description |
---|---|---|
volume |
Number |
required
Set volume. Value range: [0, 150]. Default value: 100 |
setMusicObserver(observer)
Setting the background music callback
Before playing background music, please use this API to set the music callback, which can inform you of the playback progress.
Parameters:
Name | Type | Description |
---|---|---|
observer |
TRTCMusicPlayObserver |
required
Background music playing event observer |
startPlayMusic(musicParam, observeropt)
Starting background music
Example
import TRTCCloud, { AudioMusicParam } from 'trtc-electron-sdk';
const rtcCloud = TRTCCloud.getTRTCShareInstance();
// set music playing observer
rtcCloud.setMusicObserver({
onStart: (id: number, errCode: number) => {
console.log(`onStart, id: ${id}, errorCode: ${errCode}`);
},
onPlayProgress: (id: number, curPtsMS: number, durationMS: number) => {
console.log(`onPlayProgress, id: ${id}, curPtsMS: ${curPtsMS}, durationMS: ${durationMS}`);
},
onComplete: (id: number, errCode: number) => {
console.log(`onComplete, id: ${id}, errCode: ${errCode}`);
}
});
// start playing background music
const params = new AudioMusicParam();
params.id = 1;
params.path = 'path';
params.publish = true;
rtcCloud.startPlayMusic(params);
Parameters:
Name | Type | Description |
---|---|---|
musicParam |
AudioMusicParam |
required
Music parameter |
observer |
TRTCMusicPlayObserver |
Deprecated. Background music playing observer. You should use setMusicObserver instead. |
stopPlayMusic(id)
Stopping background music
Parameters:
Name | Type | Description |
---|---|---|
id |
Number |
required
Music ID |
pausePlayMusic(id)
Pausing background music
Parameters:
Name | Type | Description |
---|---|---|
id |
Number |
required
Music ID |
resumePlayMusic(id)
Resuming background music
Parameters:
Name | Type | Description |
---|---|---|
id |
Number |
required
Music ID |
getMusicDurationInMS(path) → {Number}
Getting the total length (ms) of background music
Parameters:
Name | Type | Description |
---|---|---|
path |
String |
required
Path of the music file |
Returns:
The length of the specified music file is returned. -1 indicates failure to get the length.
- Type
- Number
seekMusicToPosInTime(id, pts)
Setting the playback progress (ms) of background music
Notice: Do not call this API frequently as the music file may be read and written to each time the API is called, which can be time-consuming. Wait till users finish dragging the progress bar before you call this API. The progress bar controller on the UI tends to update the progress at a high frequency as users drag the progress bar. This will result in poor user experience unless you limit the frequency.
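One possible way to follow this advice is to seek only when the user releases the progress bar; the slider element ID and the music ID below are illustrative assumptions.
Example
import TRTCCloud from 'trtc-electron-sdk';
const rtcCloud = TRTCCloud.getTRTCShareInstance();
const musicId = 1; // the ID passed to startPlayMusic
const slider = document.getElementById('bgm-progress') as HTMLInputElement;
// 'change' fires only when the user commits the new position (e.g., releases the slider)
slider.addEventListener('change', () => {
  rtcCloud.seekMusicToPosInTime(musicId, Number(slider.value));
});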
Parameters:
Name | Type | Description |
---|---|---|
id |
Number |
required
Music ID |
pts |
Number |
required
Unit: millisecond |
setAllMusicVolume(volume)
Setting the local and remote playback volume of background music
This API is used to set the local and remote playback volume of background music.
- Local volume: the volume of music heard by anchors
- Remote volume: the volume of music heard by audience
Notice: If 100 is still not loud enough for you, you can set the volume to up to 150, but there may be side effects.
Parameters:
Name | Type | Description |
---|---|---|
volume |
Number |
required
Volume value. Value range: 0 - 200; default: 100 |
setMusicPlayoutVolume(id, volume)
Setting the local playback volume of a specific music track
This API is used to control the local playback volume (the volume heard by anchors) of a specific music track.
Parameters:
Name | Type | Description |
---|---|---|
id |
Number |
required
Music ID |
volume |
Number |
required
Volume. Value range: 0-100. default: 100 |
setMusicPublishVolume(id, volume)
Setting the remote playback volume of a specific music track
This API is used to control the remote playback volume (the volume heard by audience) of a specific music track.
Notice: If 100 is still not loud enough for you, you can set the volume to up to 150, but there may be side effects.
Parameters:
Name | Type | Description |
---|---|---|
id |
Number |
required
Music ID |
volume |
Number |
required
Volume. Value range: 0-100; default: 100 |
enableVoiceEarMonitor(enable)
Turn on the ear monitor.
After ear monitoring is enabled, the anchor can hear their own voice, as captured by the microphone, in their earphones. This effect is suitable for scenarios such as the anchor singing.
Note that because Bluetooth earphones have high hardware latency, this effect cannot be enabled while the anchor is wearing Bluetooth earphones; please prompt the anchor on the UI to use wired earphones. Also note that not all devices deliver a good ear monitoring effect after this feature is enabled; we have disabled the effect on some devices with poor ear monitoring performance.
Parameters:
Name | Type | Description |
---|---|---|
enable |
Boolean |
required
Whether to enable ear monitoring. |
setVoiceEarMonitorVolume(volumn)
Setting ear monitor volume
Through this interface, you can set the volume level of the sound in the ear monitor effect.
If you feel that the volume is still too low after setting the volume to 100, you can increase the volume to a maximum of 150, but please note that setting the volume above 100 may risk causing distortion, so please proceed with caution.
Parameters:
Name | Type | Description |
---|---|---|
volumn |
Number |
required
Volume level, ranging from 0 to 100, default value: 100. |
setVoiceCaptureVolume(volume)
Setting voice volume
This interface allows you to set the volume level of the voice, which is typically used in conjunction with the setAllMusicVolume interface for adjusting the music volume. It helps to tune the respective volume proportions of voice and music before mixing.
If you find that the volume is still too low after setting the volume to 100, you can increase the volume to a maximum of 150. However, please note that setting the volume above 100 may risk causing distortion, so please proceed with caution.
Parameters:
Name | Type | Description |
---|---|---|
volume |
Number |
required
Volume value. The value ranges from 0 to 100. Default value: 100. |
setVoicePitch(pitch)
Setting voice pitch
This interface allows you to set the pitch of the voice, achieving the purpose of changing the tone without altering the speed.
Parameters:
Name | Type | Description |
---|---|---|
pitch |
Number |
required
The pitch value, ranging from -1.0f to 1.0f, with a default value of 0.0f. |
setVoiceChangerType(type)
Setting voice change effects
This API is used to set voice change effects for human voice.
Note: The set effect will automatically expire after exiting the room. If you need the corresponding effect the next time you enter the room, you need to call this interface again to set it.
Parameters:
Name | Type | Description |
---|---|---|
type |
TRTCVoiceChangerType |
required
Voice change type. |
setVoiceReverbType(type)
Setting voice reverb effects
This API is used to set reverb effects for human voice
Note: Effects become invalid after room exit. If you want to use the same effect after you enter the room again, you need to set the effect again using this API.
Parameters:
Name | Type | Description |
---|---|---|
type |
TRTCVoiceReverbType |
required
Reverb effect type |
playAudioEffect(effect)
Play audio effect (deprecated)
Each audio effect has a unique ID. You can use the ID to start, pause or stop the effect.
If you want to play several effects at the same time, please set a different ID for each effect. If you use the same ID for different effects, only the last one will take effect.
- Deprecated:
  - This API has been deprecated since TRTC SDK 8.0. Please use startPlayMusic instead.
Parameters:
Name | Type | Description |
---|---|---|
effect |
TRTCAudioEffectParam |
required
Sound effect parameter |
setAudioEffectVolume(effectId, volume)
Set audio effect volume (deprecated)
Notice: This API will override the volume setting by setAllAudioEffectsVolume
.
- Deprecated:
  - This API has been deprecated since TRTC SDK 8.0. Please use setMusicPublishVolume and setMusicPlayoutVolume instead.
Parameters:
Name | Type | Description |
---|---|---|
effectId |
Number |
required
Effect ID |
volume |
Number |
required
Volume value. Value range: 0 - 100. Default value: 100 |
stopAudioEffect(effectId)
Stop audio effect (deprecated)
- Deprecated:
  - This API has been deprecated since TRTC SDK 8.0. Please use stopPlayMusic instead.
Parameters:
Name | Type | Description |
---|---|---|
effectId |
Number |
required
Effect ID |
stopAllAudioEffects()
Stop all audio effects (deprecated)
- Deprecated:
  - This API has been deprecated since TRTC SDK 8.0.
setAllAudioEffectsVolume(volume)
Set all audio effects volume (deprecated)
Notice: This API will override the volume setting by setAudioEffectVolume
.
- Deprecated:
  - This API has been deprecated since TRTC SDK 8.0. Please use setAllMusicVolume instead.
Parameters:
Name | Type | Description |
---|---|---|
volume |
Number |
required
Volume Value. Value range: 0 - 100. Default value: 100 |
pauseAudioEffect(effectId)
Pause audio effect (deprecated)
- Deprecated:
  - This API has been deprecated since TRTC SDK 8.0. Please use pausePlayMusic instead.
Parameters:
Name | Type | Description |
---|---|---|
effectId |
Number |
required
Effect ID |
resumeAudioEffect(effectId)
Resume audio effect (deprecated)
- Deprecated:
  - This API has been deprecated since TRTC SDK 8.0. Please use resumePlayMusic instead.
Parameters:
Name | Type | Description |
---|---|---|
effectId |
Number |
required
Effect ID |
startSpeedTest(params) → {Number}
Start network speed test (used before room entry)
As TRTC involves real-time audio/video transfer services very sensitive to the transfer latency, it has high requirements for network stability. For most users, if their network environments are below TRTC's minimum requirements, direct room entry will cause a very poor user experience. The recommended approach is to perform the network speed test before the user enters the room, so that a reminder can be displayed on the UI to prompt the user to switch to a better network (such as from Wi-Fi to 4G) first before room entry if the user's network is poor.
Notice:
- The speed measurement process will incur a small amount of basic service fees. See Purchase Guide > Base Services.
- Please perform the network speed test before room entry. If performed after room entry, the test will affect the normal audio/video transfer, and its result will be inaccurate due to interference in the room.
- Only one network speed test task is allowed to run at the same time. A usage sketch follows this list.
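A minimal sketch, assuming TRTCSpeedTestParams carries the application's sdkAppId plus a test userId and userSig, and that results are reported through the onSpeedTestResult event of the SDK's event-emitter interface; all placeholder values below must be replaced with your own.
Example
import TRTCCloud, { TRTCSpeedTestParams } from 'trtc-electron-sdk';
const rtcCloud = TRTCCloud.getTRTCShareInstance();
rtcCloud.on('onSpeedTestResult', (result) => {
  console.log('speed test result:', result);
});
const params = new TRTCSpeedTestParams();
params.sdkAppId = 1400000000;      // placeholder SDKAppID
params.userId = 'speed_test_user'; // placeholder user ID
params.userSig = 'xxx';            // placeholder UserSig
const ret = rtcCloud.startSpeedTest(params);
if (ret < 0) {
  console.warn('startSpeedTest failed:', ret);
}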
Parameters:
Name | Type | Description | ||||||||||||||||||
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
params |
TRTCSpeedTestParams |
required
Speed test parameter. Properties
|
Returns:
interface call result, <0: failure
- Type
- Number
stopSpeedTest()
Stop network speed test
startCameraDeviceTest(view)
Start camera testing
After calling this API, an onFirstVideoFrame
event will be emitted.
Notice: You can use the setCurrentCameraDevice
API to switch between cameras during testing.
Parameters:
Name | Type | Description |
---|---|---|
view |
HTMLElement |
required
HTML element used to display the camera video image |
stopCameraDeviceTest()
Stop camera testing
startMicDeviceTest(interval, playbackopt)
Start microphone testing
After calling this API, an onTestMicVolume
event will be emitted.
This API is used to test whether the microphone functions properly. The microphone volume detected (value range: 0-100) is returned via onTestMicVolume
event notification.
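A minimal sketch of running the test and listening for volume updates; event subscription via .on is assumed from the SDK's event-emitter style, and the 300 ms interval is an illustrative choice.
Example
import TRTCCloud from 'trtc-electron-sdk';
const rtcCloud = TRTCCloud.getTRTCShareInstance();
rtcCloud.on('onTestMicVolume', (volume) => {
  console.log(`mic volume: ${volume}`); // value range: 0-100
});
rtcCloud.startMicDeviceTest(300); // report volume roughly every 300 ms
// ... later, when the user closes the test panel:
rtcCloud.stopMicDeviceTest();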
Parameters:
Name | Type | Description |
---|---|---|
interval |
Number |
required
Interval of volume notification. Unit: ms. Recommended to be greater than 200. |
playback |
Boolean |
Whether to play back the microphone sound. The user will hear their own voice during the test if playback is enabled. |
stopMicDeviceTest()
Stop microphone testing
startSpeakerDeviceTest(testAudioFilePath)
Start speaker testing
After calling this API, an onTestSpeakerVolume
event will be emitted.
This API is used to test whether the audio playback device functions properly by playing a specified audio file. If users can hear audio during testing, the device functions properly.
Parameters:
Name | Type | Description |
---|---|---|
testAudioFilePath |
String |
required
Path of the audio file, UTF-8 encoded. Supported formats: WAV, MP3. |
stopSpeakerDeviceTest()
Stop speaker testing
getSDKVersion() → {String}
Get SDK version information
Returns:
UTF-8 encoded version string
- Type
- String
setLogLevel(level)
Set log output level
Parameters:
Name | Type | Description |
---|---|---|
level |
TRTCLogLevel |
required
Output log level. Default value: TRTCLogLevelNone
|
setConsoleEnabled(enabled)
Enable/Disable console log printing
Parameters:
Name | Type | Description |
---|---|---|
enabled |
Boolean |
required
Specify whether to enable it, which is disabled by default |
setLogCompressEnabled(enabled)
Enable/Disable local log compression
If compression is enabled, the log size will be significantly reduced, but logs can be read only after being decompressed with the Python script provided by Tencent Cloud. If compression is disabled, logs will be stored in plaintext and can be read directly in Notepad, but they will take up more storage space.
Parameters:
Name | Type | Description |
---|---|---|
enabled |
Boolean |
required
Specify whether to enable it, which is enabled by default |
setLogDirPath(path)
Set local log storage path
You can use this API to change the default storage path of the SDK's local logs, which is as follows:
- Windows: C:/Users/[username]/AppData/Roaming/liteav/log, i.e., under %appdata%/liteav/log.
- macOS: under sandbox Documents/log.
Notice: Please be sure to call this API before all other APIs and make sure that the directory you specify exists and your application has read/write permissions of the directory.
Parameters:
Name | Type | Description |
---|---|---|
path |
String |
required
Log storage path, should be UTF-8 encoded string |
setLogCallback(callback)
Set log callback
Notice: If you set a log callback, the SDK's log information will be called back to you, and you need to handle it in your code. If callback is null, the log callback is canceled and the SDK will no longer call back the log.
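A minimal sketch based on the callback signature documented below; the log formatting is an illustrative choice.
Example
import TRTCCloud from 'trtc-electron-sdk';
const rtcCloud = TRTCCloud.getTRTCShareInstance();
// Receive SDK logs in your own code
rtcCloud.setLogCallback((log, level, module) => {
  console.log(`[${module}] level=${level} ${log}`);
});
// Cancel the log callback later
rtcCloud.setLogCallback(null);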
Parameters:
Name | Type | Description |
---|---|---|
callback |
function | null |
required
Callback function with the signature (log: string, level: TRTCLogLevel, module: string) => void. |
callExperimentalAPI(jsonStr)
Call experimental APIs
Notice: This API is used to enable some experimental features.
Parameters:
Name | Type | Description |
---|---|---|
jsonStr |
String |
required
JSON string of experimental interface and parameter configuration |
setRenderMode(mode)
Set render mode(deprecated)
- Deprecated:
  - Deprecated since SDK version 10.3.403. The SDK does nothing when this interface is invoked; it automatically chooses the video rendering method (e.g., WebGL, Canvas 2D, or an HTML video element).
Parameters:
Name | Type | Default | Description |
---|---|---|---|
mode |
Number |
1
|
required
|
setPluginParams(type, config)
Set custom plugin options
You should call this API before calling addPlugin
to set plugin options, otherwise the plugin will not be started and run.
Parameters:
Name | Type | Description |
---|---|---|
type |
TRTCPluginType |
required
plugin type |
config |
TRTCVideoProcessPluginOptions | TRTCMediaEncryptDecryptPluginOptions | TRTCAudioProcessPluginOptions |
required
plugin config options |
addPlugin(options) → {TRTCPluginInfo}
Add plugin
Parameters:
Name | Type | Description | |||||||||||||||
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
options |
required
plugin info Properties
|
Returns:
- Type
- TRTCPluginInfo
removePlugin(id, deviceIdopt)
Remove plugin
Parameters:
Name | Type | Description |
---|---|---|
id |
String |
required
Plugin ID |
deviceId |
String |
camera ID. When you start multi-camera, you should specify the camera to use. |
setPluginCallback(pluginCb)
Set plugin event callback which will be called when plugin event occurs
This API can only be called once globally.
Parameters:
Name | Type | Description |
---|---|---|
pluginCb |
function |
required
plugin event callback function
|