Automate YouTube Video Creation

Background

As some of you may know, I run two YouTube channels. One is Robinson Handy and Technology Services, where I talk about tech, do-it-yourself repairs and builds, and a bit more. The other is Kenny Ram Dash Cam, where I upload dash cam videos of my travels and of the bad driving habits of others that I see.

One thing that you eventually find out from running a YouTube channel is that making videos requires a bit of patience and a lot of disk storage, and that editing videos is not as easy as some of the professionals make it out to be. That is where automation comes in to speed up the content creation process.

Workflow Of Others

I have been making videos for YouTube for about 4 years now, and it has been a lot to learn. Researching what other videographers do and how they go about making their videos has been helpful. I wanted to find a more efficient way of creating content that also allowed me to use my time more wisely. As I mentioned in a previous blog post about switching to a static website to reduce the amount of time that I spent maintaining the website, the video creation process would follow along those same lines... simplify the process.

The way that I make videos, they are recorded in clips. When I make a mistake in a clip, I redo it and delete the clip that has the mistake. This contradicts what most videographers and producers suggest, which is to leave the camera rolling and cut out the bad parts in post production. I have tried that method, and it works when you are willing to put in the time to do the editing or have a dedicated team that will do the editing for you. For me, it was time consuming and took away from my ability to create content, which resulted in not being consistent in posting new content, which in turn resulted in slow growth on my YouTube channel. Talk about cause and effect.

Establishing My Own Workflow

Now that I have these individual clips, I would then have to load them onto my computer and combine them using Kdenlive. For those that are not familiar with Kdenlive, it is very prone to crashing when you least expect it, so you have to save and save often. While in Kdenlive, I would then have to edit out the oops and bad parts, which means going through the entire video. Thus a 10 minute video could take an hour or two to edit, depending on the number of mistakes that were in the video. Again... time consuming.

After doing some research, I found that Kdenlive uses a program called Melt to render videos. Kdenlive creates an XML file that is then passed to Melt when rendering is started. Looking into that led me down a path where I found out about using ffmpeg to create videos. Digging into ffmpeg further, I found that it has a number of options, including the ability to take multiple videos and combine them into a single video, and a bit more.
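
As an illustration of the idea (not the exact commands from my script), ffmpeg's concat demuxer can join clips that share the same codec, resolution, and frame rate without re-encoding them:

    # clips.txt lists the clips in the order they should appear, one per line:
    #   file 'clip01.mp4'
    #   file 'clip02.mp4'
    #   file 'clip03.mp4'

    # Join the clips into a single video without re-encoding.
    ffmpeg -f concat -safe 0 -i clips.txt -c copy output.mp4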

I got to thinking... since most of the time I am recording these individual clips, I could create a shell script that merges the clips into a single video. So I created a shell script that does all of that and a bit more.
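
A minimal sketch of that merge step, assuming the clips are named in recording order, might look like this; CLIP_DIR and OUTPUT_FILE are placeholder names, not the variables from my actual script:

    #!/bin/bash
    # Sketch: merge all of the MP4 clips in a directory into one video.
    CLIP_DIR="${1:-.}"
    OUTPUT_FILE="merged.mp4"

    cd "${CLIP_DIR}" || exit 1

    # Build the concat list in the order the clips were recorded (sorted by name).
    ls *.mp4 | sort | sed "s/.*/file '&'/" > clips.txt

    # Join the clips without re-encoding.
    ffmpeg -f concat -safe 0 -i clips.txt -c copy "${OUTPUT_FILE}"

The only real difference from the one-off command above is that the list of clips is built automatically instead of being typed out by hand.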

With Kdenlive, you can create an archive of a video project so that all of the clips, audio, effects, and more are saved as a backup. In the future, if you need to regenerate or edit that video project, you can unpack the contents of the archive and make the necessary edits in Kdenlive.

Script Structure

Following that same strategy, I set up the script to do the following, in the order listed (a rough sketch of the loop is shown after the list):

  • Check that the script is not already running. Running the script from two different processes would result in errors
  • Loop through all of the tar files in the incoming directory
  • Check that there is at least 95% free disk space, to prevent filling up the hard drive and the script erroring out because of a lack of disk space
  • Clean the working directory
  • Uncompress (if needed) and extract the contents of the tar file into a working directory
  • Since the name of the video is the file name of the tar file, the tar file name is parsed and used to create the video title, which is then used to name the output files
  • Look at the contents of the tar file to determine which channel the video is being created for
  • Set the branding text and other options based on the channel and other metadata files
  • Create an unbranded copy of the video for the archive. This footage can be used for b-roll in the future if needed
  • Create the branded version of the video ready for upload to YouTube and other social media websites
  • Create a thumbnail from the opening seconds of the video
  • Create a tarball of the unbranded video, thumbnail, and metadata files. Compress and move the tarball to the archive folder
  • Move the branded video and thumbnail to the upload directory
  • Move the original tarball to a processed directory
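
To give an idea of how those steps hang together, here is a rough sketch of the loop. The directory names, the disk threshold, and the branding text are placeholders rather than the exact values from my script:

    #!/bin/bash
    # Rough sketch of the processing loop; paths and values are placeholders.
    BASE_DIR="${HOME}/videos"
    INCOMING="${BASE_DIR}/incoming"
    WORKING="${BASE_DIR}/working"
    PROCESSED="${BASE_DIR}/processed"
    UPLOAD="${BASE_DIR}/upload"
    ARCHIVE="${BASE_DIR}/archive"
    LOCK_FILE="/tmp/render_videos.lock"

    # Bail out if another copy of the script is already running.
    if [ -f "${LOCK_FILE}" ]; then
        echo "Script is already running. Exiting."
        exit 1
    fi
    touch "${LOCK_FILE}"
    trap 'rm -f "${LOCK_FILE}"' EXIT

    for TARBALL in "${INCOMING}"/*.tar*; do
        [ -e "${TARBALL}" ] || break

        # Stop processing if the disk is getting too full.
        USED_PERCENT=$(df --output=pcent "${BASE_DIR}" | tail -1 | tr -d ' %')
        if [ "${USED_PERCENT}" -gt 95 ]; then
            echo "Not enough free disk space. Exiting."
            break
        fi

        # Start with a clean working directory.
        rm -rf "${WORKING}"
        mkdir -p "${WORKING}"

        # The tar file name doubles as the video title and the output file names.
        VIDEO_TITLE=$(basename "${TARBALL}" | sed 's/\.tar.*$//')

        tar -xf "${TARBALL}" -C "${WORKING}"

        # In the real script the channel and branding text come from the metadata
        # files inside the tarball; here it is a hard-coded placeholder.
        CHANNEL_TEXT="Robinson Handy and Technology Services"

        # Merge the clips into an unbranded master copy (see the earlier sketch).
        ls "${WORKING}"/*.mp4 | sort | sed "s/.*/file '&'/" > "${WORKING}/clips.txt"
        ffmpeg -f concat -safe 0 -i "${WORKING}/clips.txt" -c copy "${WORKING}/unbranded.mp4"

        # Branded copy for upload: overlay the channel text with the drawtext filter.
        # Depending on your ffmpeg build, you may need to pass a fontfile= option.
        ffmpeg -i "${WORKING}/unbranded.mp4" \
            -vf "drawtext=text='${CHANNEL_TEXT}':x=10:y=10:fontcolor=white" \
            "${UPLOAD}/${VIDEO_TITLE}.mp4"

        # Thumbnail from the opening seconds of the video.
        ffmpeg -i "${WORKING}/unbranded.mp4" -ss 00:00:03 -frames:v 1 \
            "${UPLOAD}/${VIDEO_TITLE}.jpg"

        # Archive the unbranded footage and metadata, then move the original
        # tarball out of incoming so it is not reprocessed.
        tar -czf "${ARCHIVE}/${VIDEO_TITLE}.tar.gz" -C "${WORKING}" .
        mv "${TARBALL}" "${PROCESSED}/"
    done

The actual script reads the channel and branding details from the metadata files instead of hard coding them, but the overall flow matches the list above.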

The entire process above is done in a for loop and will be scheduled to run via a cron job. The process is the same for both of my YouTube channels, so I am able to use a single script for this purpose. It also runs on my media server because that machine is on 24/7. That frees up my main computer for other tasks, like writing this blog post, or lets me turn it off when not in use, thus saving energy.
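
For reference, a crontab entry along these lines would run the script hourly; the script and log paths are placeholders for wherever you keep them:

    # Hypothetical crontab entry: run the render script at the top of every hour.
    0 * * * * /home/kenny/scripts/render_videos.sh >> /home/kenny/logs/render_videos.log 2>&1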

Folder Structure

I based the folder structure for the script on an application that I supported when I worked as a Level 2 Production Support Analyst (or Level 2 Operations). The application was a data warehousing application that received "data feeds", which were compressed tar files, from another application. The feeds would be placed in an "incoming" directory. A shell script would run periodically to process any data feeds that were in the directory. When the script saw that there were feeds in the incoming directory, it would move each feed file to the working directory, uncompress the tar file, extract the tar file contents, load the data into the database, recompress the original tar file, remove the contents of the working directory, and then move the tar file to the archive directory. This process would repeat until all the tar files in the incoming directory had been processed.

The folder structure in my script is similar in nature, with each folder having a specified purpose (a one-line setup command follows the list):

  • incoming - tar files that contain the video clips
  • working - contains files that were extracted by the script from the tar file that is currently being worked on
  • processed - original tar files that were previously in the incoming directory. Files are moved here so that they are not reprocessed
  • upload - videos and thumbnails that are ready to be uploaded to YouTube and other social media sites
  • archive - compressed tar files containing the unbranded video file and other metadata files
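
If you want to replicate the layout, a single command will create it; the base path here is just an example:

    # Create the folder layout under a base directory of your choosing.
    mkdir -p "${HOME}/videos"/{incoming,working,processed,upload,archive}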

Conclusion

I also want to have the videos automatically uploaded to YouTube after rendering, but that item is a bit more complicated because it requires authentication, using the Google APIs, and working within the limit on the number of videos that you can upload per day through the API.

For those of you that are interested in the script and want to use it for your channel, it is located in the "videos" folder in my Ubuntu Automation repository.

Posted: 2022-04-07
Author: Kenny Robinson, @almostengr