This guide gives an overview of the recommended methods for recording your work in Fragment:Flow.
Unfortunately, due to certain performance challenges it is not currently possible to record footage directly from Fragment:Flow – however, high-quality results can be achieved using the hardware or software methods detailed below. Please be aware that with software capture your results will vary according to the power of your system, so you may need to experiment to find the optimal settings for your machine/workflow.
1 - The Optimal Method: Dedicated Hardware
The main limitation of capture software such as OBS is that the act of encoding competes for resources, and this extra strain on your system can manifest as lower frame rates or dropped frames during capture. Video encoding is a fairly intensive task, and the likelihood of problems only increases if you run multiple apps simultaneously as part of your workflow (Resolume, TouchDesigner etc.). For this reason, the very best option for recording flawless, high-quality footage in real time is a dedicated hardware device. While a second machine and a consumer-level game capture device should work, I would highly recommend investing in a stand-alone recorder such as those made by Blackmagic or Atomos if possible.
Personally, I use an Atomos Ninja Inferno, which I love, but if buying today I would probably opt for the smaller, cheaper but equally feature-rich Ninja V. My Ninja Inferno is entirely independent and portable, and records flawless 4K video at 60 fps onto a standard SSD slotted into the back (tested/approved drives are listed on the Atomos website).
Setting it up is a simple matter of attaching an HDMI cable to a spare port on your GPU and designating the device as either a secondary monitor or a clone of an existing display via the Nvidia Control Panel. The relevant section can be found under “Set up multiple displays”, as shown in the screenshot below. In this case I have the Inferno (2) cloned to my second 1080p monitor (3), and the device automatically detects and sets itself to the correct resolution:
From here you can simply arrange your Fragment:Flow window on the display and press record via the device’s touchscreen. Audio is streamed to the unit through HDMI or an optional (and frankly extortionately expensive) break-out cable. Once recorded, you can power down the device, slide out the SSD, attach it to your PC using an SSD dock or SATA-to-USB adapter, and transfer the files.
The Atomos devices offer professional, high-quality codecs in the form of ProRes and DNxHR/DNxHD. These are visually lossless and perform far better than H.264 in editing. The only drawback is the sheer size of the files (around 15–25 GB for 5 minutes of 1080p footage in my experience, and much higher at 4K). I’d advise sticking with ProRes or DNxHR/DNxHD throughout the editing process to minimise any degradation of the footage, but clearly you’ll need to invest in some serious storage if you adopt this method (and spawn files as I do). Once I have the finished video as a DNxHR/DNxHD master file, I use FFMPEG or Handbrake to prepare H.264/H.265 copies for online distribution.
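For those curious what the FFMPEG step looks like in practice, here is a rough sketch. The exact flags depend on your master file and taste; the file names and quality values below are illustrative assumptions, not a fixed recipe:

```shell
# H.264 copy: widest compatibility. CRF 18 is visually near-lossless;
# raise the value for smaller files. yuv420p ensures broad player support.
ffmpeg -i master.mov -c:v libx264 -preset slow -crf 18 \
       -pix_fmt yuv420p -c:a aac -b:a 320k master_h264.mp4

# H.265/HEVC copy: roughly half the size at comparable quality, but
# slower to encode and less widely supported. The hvc1 tag aids Apple players.
ffmpeg -i master.mov -c:v libx265 -preset slow -crf 20 \
       -pix_fmt yuv420p -tag:v hvc1 -c:a aac -b:a 320k master_h265.mp4
```

CRF mode targets a consistent visual quality rather than a fixed bitrate, which generally suits one-off uploads better than CBR.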
In short, it’s a significant investment but I’d highly recommend these devices to anyone working with real-time visuals, and they’re the perfect choice for use with Fragment:Flow.
2 - Same Machine Software Capture with OBS & Spout
This method is less than ideal due to the performance impact outlined above and the fact that high-quality codecs like ProRes/DNxHR aren’t supported (as far as I know). That said, it can still yield decent results if your hardware provides sufficient headroom. Indeed, I tend to opt for OBS for my daily capture tasks and demo videos due to the impracticality of working with the huge files produced by my Atomos.
If you’re not already a user, the first thing to do is download and install OBS from obsproject.com (it’s free and open source).
There are two options when it comes to capturing Fragment:Flow’s output with OBS. The first is simply to use the stock window or game capture methods provided by OBS. Once you’ve added a “Window Capture” or “Game Capture” source via the Sources section, double-clicking it will bring up a menu where you can select FF’s window from the list.
The downside of this method is that the texture size (resolution) OBS receives is tied to the physical size of Fragment:Flow’s display window on screen. This may be fine for those with multiple displays who can run the output full-screen at the desired resolution, or those on a 4K screen who are happy to record at 1080p (simply set the output window size and texture resolution to 1080p via FF’s display section). However, those on single displays who wish to record at higher resolutions, unhindered by window size, should use the Spout method below.
The Spout method is my preferred one, so I’ll cover it in more detail. The steps for setting the correct Base (Canvas) Resolution and encoder settings apply in either case.
Using the Spout2 plugin for OBS
Step 1: With OBS installed, download and install the Spout2 plugin via this link:
Step 2: In a fresh OBS scene head to the sources section and click the + symbol to add a new source. Select “Spout2 Capture” from the list.
Step 3: Double-click on the newly created Spout2 Capture source to bring up its properties window. Here you should be able to select Fragment:Flow from the Spout senders list:
Step 4: With Spout the texture that OBS receives is always equal to Fragment:Flow’s internal texture resolution, as defined in Display > Texture Dimensions:
You may find that there is a mismatch and the image in OBS is either too small or partly cut off. In the example below, FF is broadcasting at 4K but the OBS canvas is set to 1080p.
To solve this head to the Settings section on the right:
Select the “Video” section and set both the Base (Canvas) Resolution and Output (Scaled) Resolution to the resolution that FF is running at. Alternatively, you could adjust the resolution in FF – the main thing is that they both match and are equal to the resolution that you wish to record at.
Step 5: With our dimensions correctly set, all that remains is to adjust the encoder settings to improve the quality of the final recording. Again, head to the Settings window on the right and go to Output > Recording, where you can experiment with the bitrate settings. I’ve found that the Nvidia NVENC encoder performs best (encoding is handled by the GPU rather than the CPU), so try that first if it’s available to you. I opt for a relatively high CBR of 50,000 Kbps to minimise visible compression, but be aware that setting this too high can cause recording and playback issues. I’m no expert at encoding video and it’s really a matter of experimenting to find the best balance for your system, but these are the values I settled on after extensive experimentation on my RTX 2080 machine:
Step 6: At this stage you should be ready to record. If you have any performance issues, try reducing the CBR value or, as a last resort, the resolution you are working at. If you’re still struggling to achieve satisfactory results, I’d highly recommend the hardware approach outlined above.
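If you ever need to re-encode an OBS recording outside of OBS, the same NVENC encoder is exposed through FFMPEG, and its options map loosely onto the OBS settings discussed in Step 5. A sketch, assuming a recent FFMPEG build with NVENC support (file names are illustrative):

```shell
# NVENC H.264 at a 50 Mbps constant bitrate - a rough CLI analogue of
# OBS's NVENC/CBR recording settings. Audio is passed through untouched.
ffmpeg -i obs_recording.mkv -c:v h264_nvenc -preset p5 \
       -rc cbr -b:v 50M -maxrate 50M -bufsize 100M \
       -c:a copy obs_recording_reencoded.mp4
```

This is purely for reference; for normal use the settings inside OBS itself are all you need.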
I hope that helps. As always, please feel free to contact me if you need any help or advice.