USB-C vs Thunderbolt 3 Dock for Streaming
Article
Ecamm Live
USB-C Docks get very hot!
When I added another Magewell capture card to my Ecamm Live setup through my USB-C dock, I noticed my video started to drop frames. I suspected that two capture cards plus the HDMI output were pushing a lot of data through a single USB-C port.
2 Gbps per capture card
The Cam Link uses about 2 Gbps of bandwidth (uncompressed 1080p60 video), so two Cam Links can overload the USB controller if they share a single port controller. The Cam Link also defaults to the bulk USB transfer mode. This mode of transferring audio/video over USB is more compatible with Windows systems, but may cause issues on macOS, where switching the transfer mode can help. Magewell cards do not need this configuration change.
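To see where that 2 Gbps figure comes from, here's a quick sketch of the math. My assumption is that the card delivers uncompressed YUV 4:2:2 video at 16 bits per pixel; the exact pixel format varies by device:

```python
# Back-of-the-envelope math for uncompressed 1080p60 video, assuming the
# common YUV 4:2:2 pixel format (16 bits per pixel on average).
width, height = 1920, 1080
fps = 60
bits_per_pixel = 16  # YUV 4:2:2; full RGB at 24 bpp would be closer to 3 Gbps

bits_per_second = width * height * fps * bits_per_pixel
print(f"{bits_per_second / 1e9:.2f} Gbps")  # -> 1.99 Gbps, i.e. "about 2 Gbps"
```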
Full speed: 40 Gbps
A 40 Gbps data transfer rate is fast enough for whatever you plug in. You’ll maintain full speed even when fully loaded and running multiple peripherals at once. You can transfer 3 hours of 4K action camera footage in less than a minute. That’s 8x faster than USB 3.0.
Dropped Frames with USB-C
I chose the Belkin Thunderbolt 3 dock because it had the ports I needed in a small package. I do wish it had two USB 3.1 ports, though.
Thunderbolt™ 3 Dock Mini HD
This compact dock connects to your computer via a single tethered cable. Dual HDMI 4K ports at 60Hz let you connect two 4K monitors to your laptop for fast, high-definition visuals across multiple screens. Use a keyboard, mouse, or other peripherals via the USB-A 3.0 and USB-A 2.0 ports, and enjoy a secure and reliable network connection through the gigabit Ethernet port.
USB-C vs Thunderbolt 3 - What's the Difference?
Thunderbolt has evolved over the years (Thunderbolt, Thunderbolt 2, and Thunderbolt 3). The newest version, Thunderbolt 3, uses the USB-C connector in order to give consumers a universal port, making Thunderbolt 3 the underlying capability and USB-C the shape of the port. Because of this change, many people are confused about the difference between the two.
Thunderbolt 3
Thunderbolt 3 runs at 40 Gbps, making it twice as fast as Thunderbolt 2 for data transfer.
Thunderbolt 3 can present video content on two 4K displays or one 5K display at 60Hz.
USB-C
USB types refer to the shape and design of the cable and port. USB versions refer to their capabilities (such as speed and power) and cable compatibility. USB-C ports are used by almost all devices that support USB 3.1.
USB Type C shares the same shape as Thunderbolt 3
Thunderbolt 3 vs USB-C Ports
Differences between USB-C and Thunderbolt 3 for Live Streaming
Thunderbolt 3 has the capabilities of USB-C, but USB-C does not have the capabilities of Thunderbolt 3. The Thunderbolt 3 port has the same design as the USB-C port. However, if you connect a Thunderbolt 3 cable to a plain USB-C port, the connection is limited to USB speeds and features.
Speed
Capture cards require a lot of bandwidth to push data. Thunderbolt 3's extra headroom moves more data at once than USB-C: Thunderbolt 3 runs at 40 Gbps, while a USB-C port on USB 3.1 Gen 2 tops out at 10 Gbps.
USB-C Hubs are HOT
My USB-C hub gets VERY hot compared to my TB3 Dock. Heat definitely affects performance. The more devices that are connected, the hotter it gets.
USB Hubs may not have enough bandwidth
There can be bandwidth issues if too many devices are connected to the same internal USB hub or controller. Make sure the USB hub is USB 3.0 compatible. USB ports are managed by what is called a USB controller, and a single controller often serves a pair of USB ports, so devices on those ports share its bandwidth. Thunderbolt 3's 40 Gbps leaves far more headroom.
Use different ports
Disconnecting other devices or connecting them to different ports can help.
If both USB ports are occupied by high-bandwidth devices such as USB capture devices, external hard drives, or webcams, the available bandwidth may be used up.
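As a rough illustration of why this happens, here's a sketch that adds up hypothetical device bandwidth against a single USB 3.0 controller's roughly 5 Gbps; the device numbers are made up for the example, not specs:

```python
# Hypothetical budget for one USB 3.0 controller (~5 Gbps raw, less after
# protocol overhead). Device numbers below are rough estimates, not specs.
controller_gbps = 5.0
devices_gbps = {
    "Cam Link (1080p60 capture)": 2.0,
    "Magewell capture card (1080p60)": 2.0,
    "External SSD copying files": 1.5,
}

total = sum(devices_gbps.values())
print(f"Demand: {total:.1f} Gbps of ~{controller_gbps:.0f} Gbps available")
if total >= controller_gbps * 0.8:  # leave headroom for protocol overhead
    print("Likely to drop frames; move a device to another controller.")
```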
Not all USB-C Cables are the Same
USB-C refers to the shape and type of connector, which is the same for all USB-C cables, but not all cables support the same protocols and transfer speeds.
Thunderbolt 3 Devices
A Thunderbolt 3 cable is required. For the best transfer rates and to make full use of the Thunderbolt 3 interface, we recommend using a cable that supports 40Gbps.
USB-C Devices
We recommend using either a USB cable that supports 10Gbps or a Thunderbolt cable that also supports USB 3.1 Gen 2.
TLDR (Too Long Didn't Read)
Buy a Thunderbolt 3 dock that has the ports you need
How to go Live with Zoom on Facebook or YouTube
Video
Ecamm Live
Learn how to do a LIVE interview on Facebook (Facebook Live with 2+ people). If you’re looking to host a live interview on Facebook, the best way to do it now is with Skype since it supports NDI. NDI allows you to easily switch to a full-screen video of any of your participants. Unfortunately, not many people like to use Skype even though it is owned by Microsoft.
- Live streaming Software: Ecamm Live
- My Camera
- My Lens
- My Capture Card
- My Light
- My Mic
- My Audio Interface
- Video Recorder
Zoom became very popular despite many of its security issues. Unfortunately, it takes a lot more manual setup than Skype to create a nice-looking interview. Most people broadcast straight to Facebook or YouTube right from Zoom, but if you want a more polished look, this is where a software switcher like Ecamm Live comes in.
Zoom requires more setup
With Skype, the audio can also be routed automatically. You can use system audio with Zoom, but you’ll get better control of your audio with software called Loopback.
Loopback enables you to combine the audio from multiple sources, including microphones and applications like Ecamm Live, then provide that combined audio to voice chat applications to be heard by all participants.
Use Ecamm Screenshare or Virtual Cam
Your Zoom participants typically only see a camera. If you want them to see what is being broadcast, you’ll need to use Virtual Camera, Screenshare (in Ecamm), or loop through an HDMI capture card to bring the video back into Zoom. Otherwise, if they try to watch on YouTube or Facebook, it will be very delayed (and can cause echo if their audio is loud enough).
How to improve Mac Streaming Performance
Video
Ecamm Live
This video is about How to improve Mac Streaming Performance by removing a bunch of cache files before going Live. I use these tips to help clear out space or even help resolve issues. Keep in mind, you should have good network bandwidth. If you are worried about deleting files, check out this article.
Cache Files can become corrupted after a crash
What is a cache on the Mac? In computing, a cache is a hardware or software component that stores data so that future requests for that data can be served faster. It plays the same role on macOS: the cache is where the OS stores data it needs to access often (such as extensions, images, or other components).

Why should you clear cache junk on macOS Mojave/Catalina? On the plus side, storing data in a cache can make your Mac run faster because it can access the stored data quickly. However, caches can also become corrupt due to bad software updates, system conflicts, or unexpected quits, and this can cause macOS problems. Old cache files do nothing but clutter your system and slow down your Mac through wasted space.

User cache makes up the majority of junk data on macOS. Your applications accumulate user cache data on disk the longer they are in use, and some apps and utilities can build up caches that reach into the gigabytes. Cache files can also become corrupted over time.
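If you'd like to see what is actually filling your cache before deleting anything, here's a small, read-only Python sketch that lists the largest folders under ~/Library/Caches (the standard user cache location on macOS). It only reports sizes and deletes nothing:

```python
# Read-only sketch: list the ten largest folders in the user cache directory
# on macOS so you can see what is taking up space before deciding what to clear.
import os
from pathlib import Path

cache_root = Path.home() / "Library" / "Caches"

def folder_size(path: Path) -> int:
    """Total size in bytes of all files under path."""
    total = 0
    for root, _dirs, files in os.walk(path, onerror=lambda e: None):
        for name in files:
            try:
                total += (Path(root) / name).stat().st_size
            except OSError:
                pass  # skip files that disappear or are unreadable
    return total

sizes = sorted(
    ((folder_size(p), p.name) for p in cache_root.iterdir() if p.is_dir()),
    reverse=True,
)
for size, name in sizes[:10]:
    print(f"{size / 1e6:10.1f} MB  {name}")
```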
- Live streaming Software: Ecamm Live
- My Camera
- My Lens
- My Capture Card
- My Light
- My Mic
- My Audio Interface
- Video Recorder
A full SSD will slow down your Mac
The benchmarks are clear: Solid-state drives slow down as you fill them up. Fill your solid-state drive to near-capacity and its write performance will decrease dramatically. The reason why lies in the way SSDs and NAND Flash storage work. Filling the drive to capacity is one of the things you should never do with a solid-state drive. A nearly full solid-state drive will have much slower write operations, slowing down your computer.
To maximize your bandwidth, see the tips below.
How to LiveStream a Graduation with the Students on Zoom and Faculty on Skype
Video
Ecamm Live
This is a how-to video on a full virtual graduation with no one face to face. Since Shelter In Place (SIP) was still in effect, Mercy HSB had to have their graduation ceremony online. Since I haven’t seen any other tutorials on how to combine the live video of the students at home using Zoom, with the faculty (also at home) presenting through Skype, I created this video hoping this can help others in a similar situation. Even though we had a slight hiccup with one of the pre-recorded videos not playing properly, I’ve been pretty happy with Ecamm Live as a live-streaming platform.
View the Mercy High School Burlingame Graduation Edited Livestream here: https://www.mercyhsb.com/about/graduation2020
- Live streaming Software: Ecamm Live
- My Camera
- My Lens
- My Capture Card
- My Light
- My Mic
- My Audio Interface
- Video Recorder
With the current pandemic and Shelter In Place / Social Distancing rules in place in the United States, graduation ceremonies have shifted to a digital graduation. Here are some tips on how you can help your school enable a virtual graduation.
Using a teleprompter to Keep Eye contact in Zoom
Video
Ecamm Live
How to keep eye contact when interviewing your guests in a livestream. If you ever watch the news, the anchors always seem to be looking at the camera even when it's a virtual interview. They do this with a confidence monitor next to a large camera; because the camera is several feet away, it isn't obvious that their eyes aren't looking directly at the lens.
Since I only have a few inches of space on my desk, I’ve been doing this with a 7″ SmallHD monitor and a beam splitter. Although it works great for showing people’s video, my aging eyes can’t read any of the computer text, so I upgraded to a 10.1″ Lilliput monitor and it is so much better!
Today’s gear:
You can also do this with an iPad: https://support.apple.com/en-us/HT210380
To see how I connected the Teleprompter, see this video: https://youtu.be/KyVPAJ2UUGg
- Live streaming Software: Ecamm Live
- My Camera
- My Lens
- My Capture Card
- My Light
- My Mic
- My Audio Interface
- Video Recorder
A teleprompter, also known as an autocue, is a display device that prompts the person speaking with an electronic visual text of a speech or script. Using a teleprompter is similar to using cue cards. The screen is in front of, and usually below, the lens of a professional video camera, and the words on the screen are reflected to the eyes of the presenter using a sheet of clear glass or a specially prepared beam splitter. Light from the performer passes through the front side of the glass into the lens, while a shroud surrounding the lens and the back side of the glass prevents unwanted light from entering the lens. Mechanically this works in a very similar way to the “Pepper’s Ghost” illusion from classic theatre – an image viewable from one angle but not another – and the concept may have similar origins.
Because the speaker does not need to look down to consult written notes, the teleprompter creates the illusion that the speaker has memorized the speech or is speaking spontaneously, looking directly into the camera lens. Cue cards, on the other hand, are always placed away from the lens axis, making the speaker look at a point beside the camera, which leaves an impression of distraction.
The technology has continued to develop. From the first mechanical paper roll teleprompters used by television presenters and speakers at U.S. political conventions in 1952; to dual glass teleprompters used by TV presenters and for U.S. conventions in 1964; to the computer-based rolls of 1982 and the four-prompter system for U.S. conventions which added a large off-stage confidence monitor and inset lectern monitor in 1996; to the replacement of glass teleprompters at U.K. political conferences by several large off-stage confidence monitors in 2006.
Using NDI Titles with Ecamm Live
Video
Ecamm Live
Another way to get professional titles into Ecamm Live is to use NDI since it supports Alpha Channels. That’s just a fancy way of saying “Transparency”. See my other videos on how I’ve created titles with Apple Keynote.
You’ll need Adobe Premiere to make this work, since After Effects does not support alpha channels over NDI for some reason. In this video, I used two computers, but if you have enough RAM and processing power, you can probably do this on the same computer (although I wouldn’t recommend it).
NDI should be done over Ethernet instead of WiFi, but I didn’t feel like looking for a Cat 6 cable for my laptop. It did surprisingly well though!
There are several advantages in using NDI Titles.
- Simple titles are real-time – Someone can change scores or names on the fly
- Running on a separate computer saves Ecamm Live's processing power for switching and streaming
- Take advantage of Alpha Channels. Green screen hacks just don’t cut it if there is transparency.
Pre-req:
Adobe Premiere and the FREE NDI tools for Adobe: https://www.newtek.com/software/adobe-creative-cloud/
- Live streaming Software: Ecamm Live
- My Camera
- My Lens
- My Capture Card
- My Light
- My Mic
- My Audio Interface
- Video Recorder
Network Device Interface (NDI) is a royalty-free software standard developed by NewTek to enable video-compatible products to communicate, deliver, and receive broadcast-quality video in a high-quality, low-latency manner that is frame-accurate and suitable for switching in a live production environment.
NewTek NDI for Adobe Creative Cloud is the only software plugin for Adobe’s industry-standard creative tools that simplifies review and approval processes, facilitates collaboration between teams in different locations, and accelerates live-to-air editing workflows with real-time, renderless playback and preview over IP via NDI, NewTek’s innovative Network Device Interface technology.
NDI is designed to run over existing Gigabit networks, with the NDI codec expected to deliver 1080i HD video at VBR data rates typically around 100 Mbit/s.
Does NDI work over WiFi?
NDI will work over a wireless network but at a reduced frame rate depending on the bandwidth available. As a general rule of thumb 100Mbit is recommended per 1080p video feed. The actual bandwidth used may be much less than 100Mbit (even below 20Mbit) depending on the complexity and size of the video being sent.
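To put that rule of thumb in perspective, here's a tiny sketch comparing how many full-quality 1080p feeds fit on different links. The throughput figures are rough assumptions, not measurements; test your own network:

```python
# Rough check of how many 1080p NDI feeds fit on a link, using the
# ~100 Mbit/s per-feed rule of thumb quoted above.
per_feed_mbps = 100
links_mbps = {
    "Gigabit Ethernet (usable)": 940,     # after protocol overhead (approx.)
    "Good 5 GHz WiFi (real-world)": 300,  # not the advertised link rate
    "Congested 2.4 GHz WiFi": 60,
}

for name, mbps in links_mbps.items():
    print(f"{name}: ~{mbps // per_feed_mbps} full-quality 1080p feed(s)")
```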
Using Skype on a separate Mac with Ecamm and NDI
Video
Ecamm Live
If you want to save processing power on the Mac that is running Ecamm Live, one option is to run your Skype call on another computer on your network. Ideally you will have everything connected via Ethernet, but I tested this with my phone as the Skype caller into my iMac, which hosted the Skype call. The iMac is on Ethernet. The MacBook Pro was on WiFi, running Ecamm Live and accepting the camera source from the other Mac via NDI!
- Live streaming Software: Ecamm Live
- Ecamm Latency Documentation
- Latency video test
- My Camera
- My Lens
- My Capture Card
- My Light
- My Mic
- My Audio Interface
- Video Recorder
Network Device Interface (NDI) is a royalty-free software standard developed by NewTek to enable video-compatible products to communicate, deliver, and receive broadcast-quality video in a high-quality, low-latency manner that is frame-accurate and suitable for switching in a live production environment.
NDI is designed to run over existing Gigabit networks, with the NDI codec expected to deliver 1080i HD video at VBR data rates typically around 100 Mbit/s.
Does NDI work over WiFi?
NDI will work over a wireless network but at a reduced frame rate depending on the bandwidth available. As a general rule of thumb 100Mbit is recommended per 1080p video feed. The actual bandwidth used may be much less than 100Mbit (even below 20Mbit) depending on the complexity and size of the video being sent.
Ecamm Live Behind the Scenes of a Show with 6 Skype Guests
Video
Ecamm Live
This is a behind-the-scenes look at a technical dry run of a fundraising campaign. I initially had 9 Skype callers on the first dry run because a few others were involved with the planning. In this video, I go over some tips and tricks I learned with Ecamm Live. Overall, I’ve been super impressed with how well Ecamm Live works. I used to use Boinx mimoLive back in the day, and it had an awesome title generator. If Ecamm can incorporate some of that titling, iPhone sharing, preview, and backstage functionality while keeping it simple, this will be the ultimate streaming software for Mac!
- This is the output of what they streamed
- Making A Difference
- Live streaming Software: Ecamm Live
- Ecamm Latency Documentation
- Latency video test
- My Camera
- My Lens
- My Capture Card
- My Light
- My Mic
- My Audio Interface
- Video Recorder
3 tips to ace your Skype interview
Make a great first impression, show your skills and rock your interview with great Skype features. Skype is consistently working on bringing us new features.
Schedule a call
Be prepared and schedule your call ahead of time. Show your organisational skills and relax knowing that everything is taken care of by Skype.
Blur your background
No need to find that perfect spot in your apartment to have a clean background. Simply turn on background blur and stop worrying about your surroundings.
HD video calling
Up your game with HD calls for clean audio and video and a world-class video calling experience.
Measure and Adjust Microphone Latency in Ecamm Live
Video
Ecamm Live
If you have a delay in your video, meaning you hear the audio before you see the corresponding video, then you are running into latency. No one wants to look like a badly dubbed kung fu movie. Here’s how to handle it with the audio preferences in Ecamm Live.
- Live streaming Software: Ecamm Live
- Ecamm Latency Documentation
- Latency video test
- My Camera
- My Lens
- My Capture Card
- My Light
- My Mic
- My Audio Interface
- Video Recorder
Audio-to-video synchronization (also known as lip sync, or by the lack of it: lip sync error, lip flap) refers to the relative timing of audio (sound) and video (image) parts during creation, post-production (mixing), transmission, reception and play-back processing. AV synchronization can be an issue in television, videoconferencing, or film.
In industry terminology the lip sync error is expressed as the amount of time the audio departs from perfect synchronization with the video, where a positive number indicates the audio leads the video and a negative number indicates the audio lags the video. This terminology and numeric standardization of lip sync error is used throughout the professional broadcast industry, as reflected in professional papers and standards such as ITU-R BT.1359-1.
Digital or analog audio/video streams or video files usually contain some sort of synchronization mechanism, either in the form of interleaved video and audio data or explicit relative timestamps. Playback must respect that relative timing, for example by stretching or interpolating the received data. If the processing does not correct for AV-sync error, the error will grow whenever data is lost to transmission errors or to missing or mis-timed processing.
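To turn a measured offset into a delay value you can dial into Ecamm Live's audio preferences, the arithmetic is simple. This sketch uses made-up numbers and assumes you measure the lag in video frames (for example with an on-camera clap):

```python
# Converting a measured video lag into an audio delay. The numbers here are
# a made-up example: measure your own offset and enter the resulting delay.
fps = 30
video_lag_frames = 3  # audio arrives 3 frames before the matching video

audio_delay_ms = video_lag_frames / fps * 1000
print(f"Delay the audio by about {audio_delay_ms:.0f} ms")  # ~100 ms
```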
Why is there a delay on my live-stream?
This is a common question from people who are new to live streaming, and from participants of live streams who wonder why there’s a 30-second delay when watching themselves on their favorite platform.
Live Streaming Latency
Even live TV has a delay. Most people never notice it because they haven’t been on the set of a TV studio. There is always a slight delay even with live TV, because the video is processed before it hits the air. There’s even more delay online!
HDMI Latency
Technically it isn't an HDMI issue; some cameras just don't have the processing power to output video in real time.
CPU Processing
Depending on your live switcher, additional processing is added to the signal to scale the video, add graphics, and so on before it is sent to your CDN (Content Delivery Network).
Let's go over a few definitions you should be familiar with for streaming video. The terminology can be a little overwhelming, so the terms below are defined here for easy reference.
Latency
What most people refer to as "delay": the amount of time between an event happening in the real world and the display of that event on the viewer's screen.
Buffering
Before a video can play, a certain amount of data must be downloaded ahead of playback so the stream can play smoothly.
Content Delivery Network (CDN)
A distribution system on the Internet that accelerates the delivery of Web pages, audio, video, and other Internet-based content to users around the world.
Embedded Audio
The audio signal is sent to the output source through the video signal. This workflow is recommended to avoid audio/video sync issues. For example, a microphone is plugged into a camera instead of the encoder.
Transcoding
The process of decoding an incoming media stream, changing one or more of its parameters (e.g. codec, video size, sampling rate, or encoder capabilities), and re-encoding it with the new parameter settings.
Video Distribution Service (VDS)
Though a VDS can take many forms, it is essentially responsible for taking one or more incoming streams of video and audio (from a broadcaster) and presenting them to viewers. This includes what is commonly referred to as a Content Delivery Network.
Why Does Latency Happen?
It comes down to physics. Video has miles to cover between the camera and, ultimately, the screen of the viewing device, and there is a series of technical steps to get it there. After video is captured, it is converted, a few seconds at a time, into a format that can be sent across the Internet.
That video has to be processed into different qualities so it can be viewed smoothly on different devices from a laptop to an iPhone. All those versions are sent across multiple servers around the country.
Video Capture
Whether you’re using a single camera or a sophisticated video mixing system, taking a live image and turning it into digital signals takes some time. At minimum, it will take at least the duration of a single captured video frame (1/30th of a second for a 30fps frame rate).
More advanced systems such as video mixers will introduce additional latency for decoding, processing, re-encoding, and re-transmitting. Your video capture and processing requirements will determine this value.
Minimum: about 33 milliseconds
Maximum: hundreds of milliseconds
Capture Card
When encoding in software (on a PC or Mac) or using a hardware encoder (Camlink or Magewell card), it takes time to convert the video signal into a compressed format suitable for transmission across the Internet. This latency can range from extremely low (thousandths of a second) to values closer to the duration of a video frame. Changing encoding parameters can lower this value at the expense of encoded video quality.
Minimum: about 1 millisecond
Maximum: about 40-50 milliseconds
Transmission to Facebook or YouTube Servers
The encoded video takes time to transmit over the Internet to a CDN. This latency is affected by the encoded media bitrate (lower bitrate usually means lower latency), the latency and bandwidth of the internet connection, and the proximity (over the Internet) to the CDN.
Minimum: about 5-10 milliseconds
Maximum: hundreds of milliseconds
Server Transcoding
Your viewers will be watching from many kinds of devices (PCs, Macs, tablets, phones, TVs, and set-top boxes) over many types of networks (LAN/WiFi, 5G LTE, 4G, etc.). In order to provide a quality viewing experience across a range of devices, a good streaming provider should provide an optimized stream.
There are two general ways to accomplish this: either the encoder streams multiple quality levels to the CDN (which are directly relayed to viewers), or the encoder sends a single high-quality stream to the CDN, which then transcodes and transrates it to multiple levels. Typically, the transcoding and transrating takes about as long as a “segment” of encoded video (more about segments later), but it can be faster at smaller resolutions and lower bitrates.
Minimum: about 1 second
Maximum: about 10 seconds
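Putting the stages above together gives a feel for the overall budget. This sketch just sums the rough figures quoted in this post, and it deliberately leaves out delivery to viewers and player-side buffering, which add the bulk of the 30 seconds people notice:

```python
# Summing the stages above gives a rough "glass to glass" budget before
# delivery to viewers and player buffering, which add several more seconds.
# The (min, max) figures are simply the rough values quoted in this post.
stages_seconds = {
    "Video capture":      (0.033, 0.5),
    "Encoding":           (0.001, 0.05),
    "Upload to the CDN":  (0.005, 0.5),
    "Server transcoding": (1.0, 10.0),
}

best = sum(lo for lo, _ in stages_seconds.values())
worst = sum(hi for _, hi in stages_seconds.values())
print(f"Roughly {best:.2f} s to {worst:.2f} s before delivery and playback")
```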
Why doesn't Zoom or Skype have a delay?
There’s a difference between "live conferencing" (FaceTime, Skype, Zoom) and "streaming" platforms like YouTube or Facebook Live. The biggest difference is how the content is consumed: live streaming is typically one-to-many, whereas conferencing is two-way communication with a limited number of participants.
The difference may seem trivial, but it is very important when the number of participants or viewers scales to a large number.
Collaboration requires specialized coding and computing services to reduce the delay between participants. These do not scale well to large audiences, which is why there’s typically a limit on the number of participants in conferencing software.
Transmission to your viewers
Each time you watch a live stream or video on demand, streaming protocols are used to deliver data over the internet. These can sit in the application, presentation, and session layers.
Online video delivery uses both streaming protocols and HTTP-based protocols. Streaming protocols like Real-Time Messaging Protocol (RTMP) enable speedy video delivery using dedicated streaming servers, whereas HTTP-based protocols rely on regular web servers to optimize the viewing experience and quickly scale. Finally, a handful of emerging HTTP-based technologies like the Common Media Application Format (CMAF) and Apple’s Low-Latency HLS seek to deliver the best of both options to support low-latency streaming at scale.
How can I reduce latency?
You can do a lot to reduce the latency of your live streams simply by changing encoder settings, internet service providers, or the type of connection.
Some attributes of your total latency may be within your control like bandwidth, encoding, or video format. Your encoder settings, the jitter buffer, the transcoding and transrating profiles, and segment duration can also be configurable. Keep in mind, however, that while a lower latency may sound desirable, it’s important to test these settings with great caution, as each choice may bring about other negative consequences.
Using Animated Lower Thirds in Ecamm Live with SOUND!
Video
Ecamm Live
There are a few tutorials on how to make animated titles for Ecamm Live, but what about making an animated lower third with SOUND? You’ll need Apple Keynote (less expensive) or Final Cut Pro (more expensive), plus Photoshop (optional); you can actually import separate PNG elements from any graphics program.
- Live streaming Software: Ecamm Live
- My Camera
- My Lens
- My Capture Card
- My Light
- My Mic
- My Audio Interface
- Video Recorder
In the television industry, a lower third is a graphic overlay placed in the title-safe lower area of the screen, though not necessarily the entire lower third of it, as the name suggests.
In its simplest form, a lower third can just be text overlying the video. Frequently this text is white with a drop shadow to make the words easier to read. A lower third can also contain graphical elements such as boxes, images or shading. Some lower thirds have animated backgrounds and text.
Lower thirds can be created using basic home-video editing software or professional-level equipment. This equipment makes use of video’s alpha channel to determine what parts of the graphic or text should be transparent, allowing the video in the background to show through.
Lower thirds are also often known as "CG" (from character generator) or captions, and sometimes chyrons in North America, due to the popularity of Chyron Corporation's Chiron I character generator, an early digital solution developed in the 1970s for rendering lower thirds. Other common terms include superbars (or simply supers) (US), name straps and astons (after Aston Broadcast Systems) (UK).
Video with lower thirds is known as a program as broadcast or dirty. Video without lower thirds is known as a clean feed or textless. For international distribution programs often include textless elements on the master tape: these are all the shots that lower thirds and digital on-screen graphics have been applied to, placed end-to-end so engineers can make a clean master if necessary.