A WebRTC server in your browser (virtual server; private audio)


Speaker.app / zenRTC / Phantom Server

Source Code available on GitHub

Speaker.app is a batteries-included, quasi-decentralized, alternative free speech audio platform that is compatible on any device that supports a modern web browser.

Rather than a centralized server proxying streams between participants (i.e., an MCU / SFU), one can choose to host a network (or "room") that others can connect to, either publicly or privately. The hosting participant's web browser acts as the "server" that the other participants on the given network connect to, and all proxying, including message storage and relaying, happens through that browser.

Public networks are visible in a "network discovery" view, which serves as the default homepage for the application.

No user accounts or passwords are required to join a public network. By default, user identities are generated using Ethereum and paired with a randomized user profile. Users can change their profile to their liking, and the profile information is stored on the device via local storage.
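For a rough sense of what that looks like, here is a minimal sketch using ethers.js; the profile fields, storage key, and helper name are illustrative assumptions rather than Speaker.app's actual implementation:

```ts
import { Wallet } from "ethers";

// Hypothetical sketch -- Speaker.app's real profile schema and storage key differ.
function createLocalIdentity() {
  const existing = window.localStorage.getItem("speaker-profile");
  if (existing) {
    return JSON.parse(existing);
  }

  // Generate a throwaway Ethereum keypair; the address doubles as a user id.
  const wallet = Wallet.createRandom();
  const profile = {
    address: wallet.address,       // public identity
    privateKey: wallet.privateKey, // kept only in local storage
    displayName: `Guest-${wallet.address.slice(2, 8)}`, // randomized default name
  };

  window.localStorage.setItem("speaker-profile", JSON.stringify(profile));
  return profile;
}
```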

To see it live, navigate to https://speaker.app.


Browser Support Matrix

            Chrome                   Edge (Chromium)   Firefox   Safari   IE
Android     ✓                        ✓                 ✓         N/A      N/A
iOS         [transcoder host only]   N/A               N/A       ✓        N/A
Linux       ✓                        ✓                 ✓         N/A      N/A
macOS       ✓                        ✓                 ✓         ✓        N/A
Windows     ✓                        ✓                 ✓         N/A      N/A

Note: on every OS except iOS, Chrome is the recommended browser; on iOS, Safari should be used.

What's in the Box

Frontend: Built with create-react-app; state is managed by multiple Providers and accessed via useContext hooks.
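As a loose sketch of that pattern (the provider, its state shape, and the hook name here are made up for illustration, not the app's actual modules):

```tsx
import React, { createContext, useContext, useState } from "react";

// Illustrative only -- Speaker.app's real providers and state shape differ.
type NetworkState = { networkName: string | null };

const NetworkContext = createContext<NetworkState>({ networkName: null });

export function NetworkProvider({ children }: { children: React.ReactNode }) {
  const [networkName] = useState<string | null>(null);

  return (
    <NetworkContext.Provider value={{ networkName }}>
      {children}
    </NetworkContext.Provider>
  );
}

// Components read the provider's state through a useContext-based hook.
export function useNetwork() {
  return useContext(NetworkContext);
}
```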

Backend: A Node.js app using Socket.io and Express. The Cluster module is used to take advantage of multiple CPU cores, and a Redis store scales Socket.io across those cores.

Redis: Used with Socket.io's Redis adapter so that Socket.io can scale across a cluster of Node.js instances running in different processes or servers, with all of them able to communicate, broadcast, and emit events to and from one another. This is mostly used by the signaling layer to initiate WebRTC sessions and media; most private communication happens over WebRTC data channels.
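A hedged sketch of how those pieces can fit together (assuming Socket.io v4, the @socket.io/redis-adapter package, and a recent Node.js with cluster.isPrimary; the real project's versions, ports, and event names may differ):

```ts
import cluster from "cluster";
import { cpus } from "os";
import { createServer } from "http";
import express from "express";
import { Server } from "socket.io";
import { createClient } from "redis";
import { createAdapter } from "@socket.io/redis-adapter";

async function startWorker() {
  const app = express();
  const httpServer = createServer(app);
  const io = new Server(httpServer);

  // Redis pub/sub lets Socket.io instances in different workers broadcast
  // signaling events to clients connected to any other worker.
  const pubClient = createClient({ url: "redis://localhost:6379" });
  const subClient = pubClient.duplicate();
  await Promise.all([pubClient.connect(), subClient.connect()]);
  io.adapter(createAdapter(pubClient, subClient));

  io.on("connection", (socket) => {
    // Relay WebRTC signaling payloads between peers (greatly simplified).
    socket.on("signal", (payload) => socket.broadcast.emit("signal", payload));
  });

  httpServer.listen(8080);
}

if (cluster.isPrimary) {
  // One worker per CPU core; workers share the listening port.
  cpus().forEach(() => cluster.fork());
} else {
  startWorker();
}
```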

MongoDB: Network details (name, host, number of participants) are stored in MongoDB. When in development mode, Mongo Express is available at http://localhost:8081, and provides a web-based administrative interface.
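A hypothetical sketch of storing those network details with the official MongoDB driver (the document shape, database, and collection names are assumptions for illustration):

```ts
import { MongoClient } from "mongodb";

// Hypothetical document shape -- the fields Speaker.app actually stores may differ.
interface NetworkDoc {
  name: string;
  isPublic: boolean;
  participantCount: number;
  createdAt: Date;
}

async function upsertNetwork(doc: NetworkDoc) {
  const client = new MongoClient("mongodb://localhost:27017");
  await client.connect();

  try {
    const networks = client.db("speaker").collection<NetworkDoc>("networks");

    // Keep the network discovery listing current as participants join and leave.
    await networks.updateOne({ name: doc.name }, { $set: doc }, { upsert: true });
  } finally {
    await client.close();
  }
}
```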

Let's Encrypt: Free SSL certificates are managed via the linuxserver.io/docker-swag Docker image.

dev-ssl-proxy: In development, a self-signed SSL proxy is used in place of Let's Encrypt, enabling local development with SSL turned on (camera / microphone / other HTML5 APIs that require a secure context).

Coturn: A STUN / TURN server for WebRTC NAT traversal is included in the Docker Compose configuration, but is not enabled by default.
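When Coturn is enabled, its address is handed to the WebRTC stack as an ICE server, roughly like this (hostname and credentials below are placeholders, not Speaker.app's configuration):

```ts
// Placeholder STUN / TURN endpoints -- substitute your own Coturn host and credentials.
const iceServers: RTCIceServer[] = [
  { urls: "stun:turn.example.com:3478" },
  {
    urls: "turn:turn.example.com:3478",
    username: "webrtc",
    credential: "secret",
  },
];

// Peers behind restrictive NATs fall back to relaying media through the TURN server.
const pc = new RTCPeerConnection({ iceServers });
```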

Included WebRTC Experiments: Within the source code are some previous real-time, shared experience experiments such as a drum looper, a sound sampler (play piano / electric guitar w/ keyboard), text-to-speech, TensorFlow-based skeletal tracker, and a game emulator.

These experiments are mostly dormant and commented-out, but have made for some interesting demos in the past and may be re-enabled in the future.

Architecture Overview

Conventional WebRTC Network Topologies

Mesh Network

Mesh network example. (Illustration borrowed from simple-peer)

Most group WebRTC calls without a centralized MCU / SFU rely on each peer sending a separate copy of its stream to every other peer. This is not very efficient: for every participant added, every connected device must send out an additional stream.

SFU

Centralized MCU / SFU example.

More advanced calling platforms use a centralized MCU / SFU. While this is more efficient in terms of the network, scaling out the backend infrastructure requires additional engineering effort and money.
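The bandwidth difference is easy to quantify: in a full mesh, each of the n participants uploads n - 1 copies of its stream, while with an MCU / SFU each participant uploads a single stream and the server fans it out. A back-of-the-envelope sketch:

```ts
// Outgoing stream count per participant in an n-person call.
function meshUploadsPerPeer(n: number): number {
  return n - 1; // one outgoing copy for every remote peer
}

function sfuUploadsPerPeer(_n: number): number {
  return 1; // a single upload to the central unit, regardless of group size
}

// e.g. a 6-person call: mesh = 5 uploads per peer, MCU / SFU = 1 upload per peer
console.log(meshUploadsPerPeer(6), sfuUploadsPerPeer(6));
```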

Speaker.app Peer-Based Network Topology

Using a topology similar to the MCU / SFU example above, Speaker.app attempts to solve the scalability issue without throwing a lot of extra money into hosting fees, by enabling individual participants to host their own networks, on their own hardware, using their own bandwidth, while at the same time providing greater privacy and flexibility.

zenRTC (built on simple-peer) is based on WebRTC and adds functionality such as user-level network strength indication, events over data channels, and P2P-based shared state syncing.
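To give a feel for what "events over data channels" means in practice, here is a minimal simple-peer sketch; the event envelope and names are illustrative, not zenRTC's actual wire format:

```ts
import Peer from "simple-peer";

const peer = new Peer({ initiator: true, trickle: false });

// Offer / answer / ICE data is exchanged out-of-band, e.g. over the
// Socket.io signaling layer described earlier.
peer.on("signal", (data) => {
  /* send `data` to the remote peer via the signaling server */
});

peer.on("connect", () => {
  // Application-level events ride the WebRTC data channel as JSON.
  peer.send(JSON.stringify({ type: "sync-state", payload: { muted: false } }));
});

peer.on("data", (raw) => {
  const event = JSON.parse(raw.toString());
  console.log("received event", event.type, event.payload);
});
```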

Phantom Server is a network host which runs in your web browser, and acts as the host, shared state manager, proxy, and transcoder for all connected participants within a WebRTC network.

Every participant connects to the Phantom Server via a P2P connection, and the Phantom Server handles stream negotiation / network programming with the other peers.

Speaker.app is able to provide a quasi-decentralized MCU / SFU by enabling clients to run one in their own browsers, as a virtual server.

At the time of writing, Chrome on the Apple M1 processor is by far the most efficient at browser-based stream transcoding, compared to the variety of Intel processors it has been tested against, though development has mostly been done on Intel processors / Linux. ARM, it seems, is the future.

Network hosting has also been tested on non-optimal hardware (e.g. a 2018 Samsung J2; an Intel i3) with adequate results when streaming 4K video to 4 participants. Better hardware, such as the new Apple M1, allows much greater throughput and scalability.

Inspiration to Create this Project

TLDR; Experimentation.

I was faced with the task of building a WebRTC bridge between two third-party services in the virtual healthcare industry, and after trying various approaches, I discovered that running a headless Chrome instance on the server was the path of least effort with the fewest bugs to squash, though not necessarily very efficient on its own.

Running a headless Chrome instance on the server is very versatile: you get a really solid WebRTC implementation baked in, along with the ability to mix audio and video streams using JavaScript and the real DOM.

Wanting to keep pursuing a scriptable WebRTC bridge built on a web browser, and thinking about how such a system might scale, I decided to let client-side devices host these sessions, no longer relying on headless Chrome instances as the main method of hosting.

Getting Started

NOTE: If you just wish to host your own network (or room), you DO NOT HAVE TO do this; instead, go to https://speaker.app/setup/network/create and create your own network!

The following is ONLY if you wish to host the entire infrastructure yourself.

Dependencies / System Requirements

All environments require

  • Bash (Unix shell), if running the included Bash build scripts
  • Docker
  • Docker Compose

Development environments require

  • Node.js 12+

Recommended system requirements

The following should get the system up and running, though additional resources may be required for higher-traffic environments. These minimums should presumably handle at least several dozen concurrent users before more RAM needs to be added.

  • 2048 MB RAM (1024 MB MAY work if the Coturn server is hosted separately)
  • Two CPU cores (one should work just fine for low-traffic environments)

Building and Running

Some Bash scripts have been provided to help facilitate starting and stopping the respective environments. It is recommended to use these scripts instead of calling the Docker commands directly, as they provide supplemental environment variables as well as additional build instructions.

In development environments, most container volumes are mounted directly from the host so that source code changes show up in the containers without rebuilding. See the respective docker-compose*.yml configurations and corresponding Dockerfile files for more details.

Set up the environment

Copy the sample environment.

$ cp .env.sample .env

Then populate .env with the configuration relevant to your environment.

Note that other environment variables are set within the docker-compose*.yml files and are intended to be considered static.

To build the Docker containers

Note that development environments may require additional dependencies to be installed.

IMPORTANT: If you are using a shell other than Bash, the following scripts should be prefixed with the "bash" command (i.e. "bash ./build.prod.sh").

$ ./build.prod.sh # Or ./build.dev.sh, depending on environment

To start the containers

$ ./start.prod.sh # Or ./start.dev.sh, depending on environment

To stop the containers

This stops the containers and tears down their temporary storage.

$ ./stop.sh # Stops any environment

Public Network Discovery / Private Networks

Public networks can be discovered on the default home page. Private networks do not appear in the public network discovery but can be accessed via URL or QR code.

Testing

Testing can be performed by running:

$ ./test.sh

Note, development packages will be automatically installed locally when testing.

At this time, testing is not fully automated. Several internal utilities are tested using Jest (via the above command), while device-specific testing is performed manually using BrowserStack.
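A Jest spec for one of those internal utilities looks roughly like the following; the utility and its module path are hypothetical, for illustration only:

```ts
// networkName.test.ts -- hypothetical utility, not an actual Speaker.app module.
import { normalizeNetworkName } from "./networkName";

describe("normalizeNetworkName", () => {
  it("trims whitespace and normalizes casing", () => {
    expect(normalizeNetworkName("  My Room  ")).toBe("my room");
  });
});
```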


Contributing / Forking

Source-code contributions and forks are welcome!

Open an issue if you find something that needs to be addressed that you aren't going to address yourself.

For ideas of what to contribute, take a look at our open issues.

To contribute, fork the repository, create a new branch, add some code or documentation updates, then submit a PR.

Motto

To contribute, however slightly, to the commonwealth of all human innovation and experience.

Help Us Continue Writing Free Software

PayPal: https://www.paypal.com/paypalme/zenOSmosis

Buy Me a Coffee: https://www.buymeacoffee.com/Kg8VCULYI

License

GNU GENERAL PUBLIC LICENSE

Source Code

Source Code available on GitHub


This content originally appeared on DEV Community and was authored by jzombie

