/UnityWebRTC/Assets/StreamingAssets/Certs`
16 | * the file `req.conf` is used to configure the certificate. You need to adapt `CN` and `subjectAltName` if you want to access the stream from somewhere other than `localhost`.
17 | * **in this folder** execute
18 | - `openssl req -config req.conf -x509 -newkey rsa:4096 -keyout localhost_key.pem -out localhost.pem -nodes -days 3650`
19 | - `openssl pkcs12 -export -in localhost.pem -inkey localhost_key.pem -out localhost.pfx.bin -nodes`
20 | - if you have paid close attention, you will have noticed that the resulting key is not saved as a `*.pfx` file (as done in the referenced instructions) but as `*.pfx.bin`. It seems that `*.pfx` files in `StreamingAssets` are not exported into UWP packages; changing the file extension works around that.
21 | - currently, the Unity app expects the name to be `localhost.pfx.bin`. Use that name even if the certificate was generated for another domain and/or IP (or change it in `Assets/Scripts/WebSocketSignaler.cs`).
22 | * If you have chosen the default configuration, you can make Chrome/Chromium accept self-signed certificates for localhost by typing `chrome://flags/#allow-insecure-localhost` into the address bar and enabling that option.
23 | * Alternatively, you can access the `WebSocketSignaler` via `https` instead of `wss` (e.g. type `https://localhost:9999` into your browser while the project is running) and manually accept the certificate.
24 |
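Whether the certificate is actually being served can be checked from Python. The sketch below assumes the Unity project is running on port 9999 (the example port above); `probe_tls` is a hypothetical helper, not part of this repo:

```python
import socket
import ssl

def probe_tls(host="localhost", port=9999, cafile=None):
    """Attempt a TLS handshake against the signaling endpoint.

    Pass cafile="localhost.pem" to trust the generated self-signed
    certificate; with the default trust store the handshake is
    expected to fail for a self-signed certificate.
    """
    ctx = ssl.create_default_context(cafile=cafile)
    try:
        with socket.create_connection((host, port), timeout=5) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                return tls.version() is not None
    except (ssl.SSLError, OSError):
        return False
```

With the default configuration this should return `True` only while the project is running and `cafile` points at the generated `localhost.pem`.
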
25 | ## HoloLens 2 Limitations
26 |
27 | As of 2020-10-16, `Mixed Reality WebRTC` does not support HoloLens 2 builds targeting `ARM64` ([ref](https://github.com/microsoft/MixedReality-WebRTC/issues/414)). `ARM` (32-bit) seems to be [supported](https://github.com/microsoft/MixedReality-WebRTC/issues/235) though.
28 |
29 | ## Usage
30 |
31 | Open `Assets/Scenes/SignalerExample` in Unity and check the `Launcher` GameObject. The `RTCServer` features some settings which can be adjusted:
32 |
33 | * `NeedVideo` -- whether the video stream should be sent
34 | * `NeedAudio` -- whether an audio track should be sent
35 | * `VideoWidth`, `VideoHeight` and `VideoFps` should be set to values your webcam/HoloLens camera actually supports; otherwise the stream might not initialize successfully. `VideoFps` can be set to 0 to ignore that option.
36 | * `VideoProfileId` -- some HoloLens 2 resolutions are only available via a profile. This project prints all discovered profiles and resolutions on initialization. When `VideoProfileId` is empty, it is ignored.
37 | * `ConnectionType` -- Should be set to `TCP` or `WebSocket`, depending on the client (TCP: Python, WebSocket: browser) you plan to use
38 | * `UseRemoteStun` -- when enabled, the STUN server `stun.l.google.com:19302` is passed to the `PeerConnection`. This might help in cases where server and client cannot reach each other directly.
39 |
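The `ConnectionType` choice maps to two different client endpoints. As a sketch (`signaling_endpoint` is a hypothetical helper; the port numbers are the defaults used elsewhere in this repo -- 9095 for the Python/TCP client, 9999 for the WebSocket example):

```python
def signaling_endpoint(connection_type, host="localhost"):
    """Hypothetical helper mirroring the Unity `ConnectionType` setting."""
    if connection_type == "TCP":
        # consumed by the Python client (python/cli.py)
        return ("tcp", host, 9095)
    if connection_type == "WebSocket":
        # consumed by the browser client (web/index.html)
        return ("wss", host, 9999)
    raise ValueError(f"unknown ConnectionType: {connection_type}")
```
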
40 | ### Tested Resolutions
41 |
42 | #### HoloLens 1 (x86)
43 |
44 | * `1280x720@30fps`
45 | * `896x504@24fps`
46 |
47 | #### HoloLens 2 (ARM)
48 |
49 | * `896x504@15fps`
50 | * `Profile '{6B52B017-42C7-4A21-BFE3-23F009149887},120': 640x360@30fps`
51 |
--------------------------------------------------------------------------------
/python/README.md:
--------------------------------------------------------------------------------
1 | # Receiving MR WebRTC video stream via TCP.
2 |
3 | This Python sample uses `aiortc` to send and receive WebRTC messages and `OpenCV` to display the received frames. The message structure of `Mixed Reality WebRTC` differs slightly from the structure the default `TcpSocketSignaling` expects; `UnityTcpSignaling` takes care of this. Note that this connection is not encrypted and should be used with caution.
4 |
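That structural difference can be sketched with the JSON shapes the Unity signaler exchanges (field names taken from `signaler.py`; the SDP and candidate payloads below are placeholders):

```python
import json

# Messages are newline-delimited JSON. SDP messages carry the SDP under a
# key named after its type; ICE messages use "sdpMLineindex" (lowercase 'i').
offer = {"type": "sdp", "offer": "v=0 ..."}  # placeholder SDP
ice = {
    "type": "ice",
    "candidate": "candidate: ...",  # placeholder candidate line
    "sdpMid": "0",
    "sdpMLineindex": 0,
}
wire = json.dumps(offer, sort_keys=True).encode("utf8") + b"\n"
```
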
5 | Requirements can be installed with `pip`:
6 |
7 | ```
8 | pip install -r requirements.txt
9 | ```
10 |
11 | Note that depending on your platform, installing OpenCV might be more difficult. In that case, I would recommend using [conda](https://docs.conda.io/en/latest/miniconda.html) to set up and maintain Python environments. Installing OpenCV can then be done with `conda install opencv` on most platforms.
12 |
13 | The program can be launched from the command line:
14 |
15 | ```python cli.py --host localhost --port 9095```
16 |
17 | After a couple of seconds, an `OpenCV` window should pop up, showing the stream. The program can be closed with `Ctrl+C` in the command line.
--------------------------------------------------------------------------------
/python/cli.py:
--------------------------------------------------------------------------------
1 | import argparse
2 | import asyncio
3 | import logging
4 | from collections import deque
5 |
6 | import cv2
7 | import numpy
10 |
11 | from aiortc import (
12 | RTCIceCandidate,
13 | RTCPeerConnection,
14 | RTCSessionDescription,
15 | VideoStreamTrack,
16 | )
17 | from aiortc.contrib.signaling import BYE, add_signaling_arguments, create_signaling
18 |
19 | from receiver import OpenCVReceiver
20 | from signaler import UnityTcpSignaling
21 |
22 | _LOGGER = logging.getLogger("mr.webrtc.python")
23 | _LOGGER.addHandler(logging.NullHandler())
24 |
25 |
26 | async def run(pc, player, receiver, signaling, role, queue):
27 | def add_tracks():
28 | if player and player.audio:
29 | pc.addTrack(player.audio)
30 |
31 | if player and player.video:
32 | pc.addTrack(player.video)
33 |         else:
34 |             pc.addTrack(VideoStreamTrack())  # aiortc's dummy track; FlagVideoStreamTrack only exists in the aiortc examples
35 |
36 | @pc.on("track")
37 | def on_track(track):
38 | _LOGGER.info("Receiving %s" % track.kind)
39 | receiver.addTrack(track)
40 |
41 | # connect signaling
42 | _LOGGER.info("Waiting for signaler connection ...")
43 | await signaling.connect()
44 |
45 | # consume signaling
46 | while True:
47 | obj = await signaling.receive()
50 |
51 | if isinstance(obj, RTCSessionDescription):
52 | await pc.setRemoteDescription(obj)
53 | await receiver.start()
54 |
55 | async def check_queue():
56 | while True:
57 | if len(queue):
58 | img = queue.pop()
59 | queue.clear()
60 | try:
61 | cv2.imshow("hello", img)
62 | cv2.waitKey(1)
63 | except Exception as e:
64 | print(e)
65 | await asyncio.sleep(0.05)
66 |
67 | asyncio.create_task(check_queue())
68 |
69 | if obj.type == "offer":
70 | # send answer
71 | # add_tracks()
72 | await pc.setLocalDescription(await pc.createAnswer())
73 | await signaling.send(pc.localDescription)
74 | elif isinstance(obj, RTCIceCandidate):
75 | await pc.addIceCandidate(obj)
76 | elif obj is BYE:
77 | print("Exiting")
78 | break
79 |
80 |
81 | if __name__ == "__main__":
82 | import time
83 |
84 | parser = argparse.ArgumentParser(description="Video stream from the command line")
85 | parser.add_argument("--verbose", "-v", action="count")
86 | parser.add_argument("--host", "-ip", help="ip address of signaler/sender instance")
87 | parser.add_argument("--port", "-p", type=int, help="port of signaler/sender instance")
88 | add_signaling_arguments(parser)
89 | args = parser.parse_args()
90 |
91 | if args.verbose:
92 | logging.basicConfig(level=logging.DEBUG)
93 | else:
94 | logging.basicConfig(level=logging.WARN)
95 | _LOGGER.setLevel(level=logging.INFO)
96 |
97 | host = args.host or "localhost"
98 | port = args.port or 9095
99 |
100 | # create signaling and peer connection
101 | signaling = UnityTcpSignaling(host=host, port=port)
102 | pc = RTCPeerConnection()
103 |
104 | player = None
105 | frame_queue = deque()
106 | receiver = OpenCVReceiver(queue=frame_queue)
107 | # run event loop
108 | loop = asyncio.get_event_loop()
109 | try:
110 | loop.run_until_complete(
111 | run(
112 | pc=pc,
113 | player=player,
114 | receiver=receiver,
115 | signaling=signaling,
116 | role="answer",
117 | queue=frame_queue,
118 | )
119 | )
120 | except KeyboardInterrupt:
121 | pass
122 | finally:
123 | # cleanup
124 | _LOGGER.info("Shutting down receiver and peer connection.")
125 | loop.run_until_complete(receiver.stop())
126 | loop.run_until_complete(signaling.close())
127 | loop.run_until_complete(pc.close())
128 |
--------------------------------------------------------------------------------
/python/receiver.py:
--------------------------------------------------------------------------------
1 | import asyncio
2 |
3 | from aiortc.mediastreams import MediaStreamError
4 |
5 |
6 | class OpenCVReceiver:
7 | def __init__(self, queue):
8 | self.__tracks = []
9 | self.__tasks = []
10 | self.queue = queue
11 |
12 | def addTrack(self, track):
13 | self.__tracks.append(track)
14 |
15 | async def start(self):
16 | for track in self.__tracks:
17 | self.__tasks.append(asyncio.ensure_future(self.__run_track(track)))
18 |
19 | async def stop(self):
20 | for task in self.__tasks:
21 | task.cancel()
22 |
23 | async def __run_track(self, track):
24 | while True:
25 | try:
26 | frame = await track.recv()
27 | self.queue.append(frame.to_ndarray(format="bgr24"))
28 | except MediaStreamError:
29 |                 return  # track ended; stop the consumer task
30 |
--------------------------------------------------------------------------------
/python/requirements.txt:
--------------------------------------------------------------------------------
1 | aiortc>=1.0
2 | opencv-python
3 |
--------------------------------------------------------------------------------
/python/signaler.py:
--------------------------------------------------------------------------------
1 | from aiortc.contrib.signaling import TcpSocketSignaling, candidate_to_sdp, candidate_from_sdp, BYE
2 | from aiortc import RTCIceCandidate, RTCSessionDescription
3 | import json
4 | import asyncio
5 |
6 |
7 | def unity_object_to_string(obj):
8 | if isinstance(obj, RTCSessionDescription):
9 | message = {"type": "sdp", obj.type: obj.sdp}
10 | elif isinstance(obj, RTCIceCandidate):
11 | message = {
12 | "candidate": "candidate:" + candidate_to_sdp(obj),
13 | "sdpMid": obj.sdpMid,
14 | "sdpMLineindex": obj.sdpMLineIndex,
15 | "type": "ice",
16 | }
17 | else:
18 | assert obj is BYE
19 | message = {"type": "bye"}
20 | return json.dumps(message, sort_keys=True)
21 |
22 |
23 | def unity_object_from_string(message_str):
24 | message = json.loads(message_str)
25 | if message["type"] == "sdp":
26 | if "answer" in message:
27 | return RTCSessionDescription(type="answer", sdp=message["answer"])
28 | else:
29 | return RTCSessionDescription(type="offer", sdp=message["offer"])
30 | elif message["type"] == "ice" and message["candidate"]:
31 | candidate = candidate_from_sdp(message["candidate"].split(":", 1)[1])
32 | candidate.sdpMid = message["sdpMid"]
33 | candidate.sdpMLineIndex = message["sdpMLineindex"]
34 | return candidate
35 | elif message["type"] == "bye":
36 | return BYE
37 |
38 |
39 | class UnityTcpSignaling(TcpSocketSignaling):
40 | async def receive(self):
41 | await self._connect(False)
42 | try:
43 | data = await self._reader.readuntil()
44 | except asyncio.IncompleteReadError:
45 | return
46 | return unity_object_from_string(data.decode("utf8"))
47 |
48 |     async def send(self, descr):
49 |         await self._connect(True)
50 |         data = unity_object_to_string(descr).encode("utf8")
51 |         self._writer.write(data + b"\n")
52 |         await self._writer.drain()  # flush before the next signaling step
--------------------------------------------------------------------------------
/web/README.md:
--------------------------------------------------------------------------------
1 | # Receiving MR WebRTC video stream via WebSockets in the browser.
2 |
3 | The file `index.html` can be opened offline and has been tested with Chrome and Firefox. Make sure to read [how to create and accept an SSL certificate](../UnityWebRTC/README.md) in the README of `UnityWebRTC`. Adjust port (and host) in the text field and hit connect. Shortly after, you should be able to see the video stream from `UnityWebRTC`.
4 |
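If your browser is picky about `file://` pages, the client can also be served over HTTP from the repository root. A sketch using Python's standard library (port 8000 is an arbitrary choice):

```python
import functools
import http.server

# Serve the ./web directory on http://localhost:8000/index.html
Handler = functools.partial(http.server.SimpleHTTPRequestHandler, directory="web")
httpd = http.server.ThreadingHTTPServer(("localhost", 8000), Handler)
# httpd.serve_forever()  # Ctrl+C to stop
```
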
--------------------------------------------------------------------------------
/web/index.html:
--------------------------------------------------------------------------------
[markup not captured in this extract: the ~110 lines of index.html, including the "Remote Address:" input and connect button, were stripped when the file was converted to text]
--------------------------------------------------------------------------------