Screen sharing using WebRTC in Chrome



Screen sharing means transmitting the contents of your computer screen to others as a video stream, which can greatly improve communication efficiency. As a form of interaction, screen sharing appears in many everyday scenarios, for example:

  • In a video conference, screen sharing lets the speaker present local files, data, web pages, PPT slides and other content to the other participants.
  • In an online classroom, screen sharing lets a teacher show courseware, notes, lecture material and other content to students.

So how can we implement screen sharing simply and conveniently? This article describes in detail how to use WebRTC to quickly implement screen sharing in Chrome.

Screen sharing implementation

Screen sharing can be roughly divided into three stages: screen stream capture, stream transmission, and screen stream rendering.

  • Screen stream capture: the getDisplayMedia method provided by Chrome can be used to capture the screen stream, which supplies the data for transmission.
  • Stream transmission: WebRTC is used to transmit the data to a server or to another client. The low latency and resilience to weak networks that WebRTC provides ensure a good user experience, making it a very suitable choice.
  • Screen stream rendering: a canvas or a video element can be used for rendering and playback. In this article we use a video element.

Some preparation

  • Windows 10
  • Chrome 93 (note: this article uses the screen-capture API provided by Chrome, which requires Chrome 72 or above)
  • Prepare an HTML file and write two video tags in it: one to play the locally captured screen-share stream, and the other to render the screen-share stream received from the remote end

<div id="videos">

  <div class="video-container">
    <h2>Locally captured screen share stream</h2>
    <video id="srcVideo" playsinline controls muted loop class="srcVideo"></video>
  </div>

  <div class="video-container">
    <h2>Screen share stream received from the remote end</h2>
    <video id="shareStreamVideo" playsinline autoplay muted class="shareStreamVideo"></video>
  </div>

</div>

  • Prepare a JS file for the screen-sharing logic, and reference it from the HTML file prepared above.

Screen stream capture

The screen stream can be captured with the navigator.mediaDevices.getDisplayMedia(displayMediaOptions) method. This method is asynchronous: it returns a Promise that resolves to a MediaStream object. We can then take the video or audio track out of this object for transmission.

const srcVideo = document.getElementById('srcVideo');
const shareStreamVideo = document.getElementById('shareStreamVideo');

let srcStream;
let shareStream;
// Define the constraints for the captured stream
const displayMediaOptions = {
  video: {
    width: {max: 1280},
    height: {max: 720},
    frameRate: {ideal: 15}
  }
};
// Capture the screen stream and play it in the local video element
navigator.mediaDevices.getDisplayMedia(displayMediaOptions).then(stream => {
  srcStream = stream;
  srcVideo.srcObject = stream;
  srcVideo.play();
  // Transmit the stream
  call();
});

As you can see, different constraints can be passed when capturing the stream to obtain one that meets our needs. Here we capture a 1280 × 720, 15 fps screen stream.
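The right constraints depend on the content being shared. As an illustration, the choice could be wrapped in a small helper; note that this function and its presets are our own sketch, not part of any browser API:

```javascript
// Hypothetical helper (not from the article): builds getDisplayMedia
// constraints for a named preset.
function buildDisplayMediaOptions(preset) {
  const presets = {
    // Static content: allow a higher resolution, a low frame rate is fine
    detail: {width: {max: 1920}, height: {max: 1080}, frameRate: {ideal: 5}},
    // Dynamic content: cap the resolution, ask for a higher frame rate
    motion: {width: {max: 1280}, height: {max: 720}, frameRate: {ideal: 30}}
  };
  // Fall back to the motion preset for unknown names
  return {video: presets[preset] || presets.motion};
}

// Usage (in a browser):
// navigator.mediaDevices.getDisplayMedia(buildDisplayMediaOptions('detail'));
```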

Streaming and rendering

Before we start transmitting the stream, let's introduce a concept. For the transmission of a screen-share stream, we generally distinguish two modes depending on the shared content:

  • Clarity first. This mode suits sharing mostly static content, such as PPT slides, notes and other text-display scenarios. It ensures that, under poor network conditions, the resolution is not reduced, so the picture stays sharp. The frame rate, however, may drop, making the picture appear choppier.
  • Fluency first. This mode suits sharing dynamic content, such as videos and animated web pages. It ensures that, under poor network conditions, the shared picture does not freeze and the frame rate is not reduced. The bit rate and resolution, however, may drop, so the sharpness of the picture cannot be guaranteed.

The good news is that WebRTC already gives us a way to choose between these two transmission modes: the contentHint property of MediaStreamTrack.

function setVideoTrackContentHints(stream, hint) {
  const track = stream.getVideoTracks()[0];
  if ('contentHint' in track) {
    track.contentHint = hint;
    if (track.contentHint !== hint) {
      console.log('Invalid video track contentHint: \'' + hint + '\'');
    }
  } else {
    console.log('MediaStreamTrack contentHint attribute not supported');
  }
}

function call() {
  // Clone the captured stream
  shareStream = srcStream.clone();
  // 'detail' (or 'text') sets clarity first; use 'motion' for fluency first
  setVideoTrackContentHints(shareStream, 'detail');
  // Establish the peer connection
  establishPC(shareStreamVideo, shareStream);
}
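The mapping from content type to hint can also be made explicit. Here is a small sketch; the function name and its content categories are our own invention, while the three hint values ('detail', 'text', 'motion') are the ones WebRTC defines for video tracks:

```javascript
// Hypothetical helper: chooses a contentHint value for a given kind
// of shared content. 'detail' and 'text' favour clarity; 'motion'
// favours fluency.
function contentHintFor(kind) {
  switch (kind) {
    case 'slides':
    case 'document':
      return 'detail';  // static pictures: keep the resolution
    case 'code':
      return 'text';    // rendered text: keep it legible
    default:
      return 'motion';  // video, animations: keep the frame rate
  }
}
```

With such a helper, setVideoTrackContentHints(shareStream, contentHintFor('slides')) would request clarity-first transmission.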

Next we create the peer connections, transmit the stream, then simulate receiving the stream and rendering it. We create two RTCPeerConnection objects here to simulate two clients.

function establishPC(videoTag, stream) {
  // Create two peer connections to simulate two clients:
  // pc1 plays the local side, pc2 plays the remote side
  const pc1 = new RTCPeerConnection(null);
  const pc2 = new RTCPeerConnection(null);
  pc1.onicecandidate = e => {
    // Can be understood as telling pc2 the address at which to reach pc1
    onIceCandidate(pc1, pc2, e);
  };
  pc2.onicecandidate = e => {
    // Can be understood as telling pc1 the address at which to reach pc2
    onIceCandidate(pc2, pc1, e);
  };
  // Add the tracks to be transmitted to the peer connection
  stream.getTracks().forEach(track => pc1.addTrack(track, stream));
  // Exchanging offer and answer can be understood as both sides
  // negotiating media information such as codecs
  pc1.createOffer()
    .then(desc => {
      pc1.setLocalDescription(desc)
        .then(() => pc2.setRemoteDescription(desc))
        .then(() => pc2.createAnswer())
        .then(answerDesc => onCreateAnswerSuccess(pc1, pc2, answerDesc))
        .catch(onSetSessionDescriptionError);
    })
    .catch(e => console.log('Failed to create session description: ' + e.toString()));
  // The remote end receives the stream and hands it to the video element to play
  pc2.ontrack = event => {
    if (videoTag.srcObject !== event.streams[0]) {
      videoTag.srcObject = event.streams[0];
    }
  };
}

function onSetSessionDescriptionError(error) {
  console.log('Failed to set session description: ' + error.toString());
}

function onCreateAnswerSuccess(pc1, pc2, desc) {
  // Apply the answer on both sides
  pc2.setLocalDescription(desc)
    .then(() => pc1.setRemoteDescription(desc))
    .catch(onSetSessionDescriptionError);
}

function onIceCandidate(pc, otherPc, event) {
  // Hand each side's ICE candidates to the other side
  otherPc.addIceCandidate(event.candidate);
}
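The nested promise chain above can be hard to follow. As a side note, the same offer/answer exchange reads more clearly with async/await. This is a sketch: connectPeers is our own name, and the code assumes a browser environment that provides RTCPeerConnection:

```javascript
// Sketch: the same offer/answer handshake written with async/await.
// pc1 is the sending side, pc2 the receiving side.
async function connectPeers(pc1, pc2, stream) {
  // Add the outgoing tracks to the sending side
  stream.getTracks().forEach(track => pc1.addTrack(track, stream));
  // pc1 offers, pc2 answers
  const offer = await pc1.createOffer();
  await pc1.setLocalDescription(offer);
  await pc2.setRemoteDescription(offer);
  const answer = await pc2.createAnswer();
  await pc2.setLocalDescription(answer);
  await pc1.setRemoteDescription(answer);
}
```

Error handling is then a single try/catch around the await calls instead of a .catch at the end of each chain.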

So far we have finished writing the code. Let's see how it works.

After selecting the content to be shared, the selected content is shared normally.


So far we have covered how to implement a simple screen-sharing function in Chrome. Real-world scenarios are more complex, however: sharing system audio, screen sharing with other kinds of terminals such as Android and iOS, network traversal on complex networks, and compatibility handling on every platform. After extensive development and testing, the Tangqiao WebRTC team has built a complete set of solutions for these problems; you are welcome to learn about and use them. The team will also continue to share interesting audio and video topics, and comments and corrections are welcome.
