I asked ChatGPT to create some Bash scripts that encapsulate FFmpeg commands to make my video editing easier. I do not do much fancy editing, just trimming, cutting out segments, and so on. Here are the scripts I use most often.

1. Thumbnail Creation

I use this to create a thumbnail from a video. The script below captures a frame at the given moment and prints two lines of text on it.

#!/bin/bash

# Check for correct number of arguments
if [ "$#" -ne 5 ]; then
    echo "Usage: $0 <video_file> <time_frame> <output_thumbnail> <text1> <text2>"
    exit 1
fi

# Assign arguments to variables
VIDEO_FILE="$1"
TIME_FRAME="$2"
OUTPUT_THUMBNAIL="$3"
TEXT1="$4"
TEXT2="$5"

# Create a temporary image file for the thumbnail
TEMP_IMAGE=$(mktemp /tmp/thumbnail.XXXXXX.png)

# Extract the frame at the specified timeframe and scale it to 720p
ffmpeg -ss "$TIME_FRAME" -i "$VIDEO_FILE" -vframes 1 -q:v 2 -vf "scale=1280:720" "$TEMP_IMAGE"

# Add two texts to the image using ffmpeg
ffmpeg -i "$TEMP_IMAGE" -vf "drawtext=text='$TEXT1':fontcolor=yellow:fontsize=62:box=1:boxcolor=black@0.5:boxborderw=5:x=(w-text_w)/2:y=(h-text_h)/2+140, drawtext=text='$TEXT2':fontcolor=white:fontsize=96:box=1:boxcolor=black@0.5:boxborderw=5:x=(w-text_w)/2:y=(h-text_h)/2+220" -y "$OUTPUT_THUMBNAIL"

# Clean up the temporary image file
rm "$TEMP_IMAGE"

# Recompress the thumbnail with ImageMagick to keep it under YouTube's 2 MB limit
convert "$OUTPUT_THUMBNAIL" -quality 85 -resize 1920x1080\> "$OUTPUT_THUMBNAIL"

echo "Thumbnail created and saved as $OUTPUT_THUMBNAIL"

The resulting thumbnails can be seen on my YouTube channel.

2. Cutting Out Segments

This script cuts out unnecessary parts of a video. In my case, I remove the time spent waiting at crosswalks during my commute. It takes a text file of time ranges as input, and multiple ranges may be listed. These are the ranges you want to keep:

00:01:00 00:02:00
00:05:00 00:10:00
00:12:00 00:12:10

The script applies a fade of about one second at the beginning and end of each segment for smooth transitions.
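The core idea can be sketched as below. This is a minimal sketch, not the exact script: the helper names (`to_sec`, `build_segment_cmds`), the `seg_NNN.mp4` naming, and the one-second `fade`/`afade` filters are my illustrative assumptions.

```shell
#!/bin/bash

# Convert an HH:MM:SS timestamp to seconds.
to_sec() {
    IFS=: read -r h m s <<< "$1"
    echo $((10#$h * 3600 + 10#$m * 60 + 10#$s))
}

# Print one ffmpeg command per keep-range listed in the ranges file.
# Each segment is re-encoded with a ~1 s video/audio fade at both ends.
build_segment_cmds() {
    local ranges_file="$1" input="$2" n=0
    local start end dur
    while read -r start end; do
        [ -z "$start" ] && continue
        n=$((n + 1))
        dur=$(( $(to_sec "$end") - $(to_sec "$start") ))
        printf 'ffmpeg -ss %s -to %s -i "%s" -vf "fade=t=in:st=0:d=1,fade=t=out:st=%d:d=1" -af "afade=t=in:st=0:d=1,afade=t=out:st=%d:d=1" "seg_%03d.mp4"\n' \
            "$start" "$end" "$input" "$((dur - 1))" "$((dur - 1))" "$n"
    done < "$ranges_file"
}

# Usage: build_segment_cmds ranges.txt input.mp4 | bash
# The segments can then be joined with the concat demuxer:
#   for f in seg_*.mp4; do echo "file '$f'"; done > list.txt
#   ffmpeg -f concat -i list.txt -c copy output.mp4
```

The fades force a re-encode of each segment; a stream copy (`-c copy`) would be faster but cannot apply filters.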

3. Blurring Faces

This script uses one more package, called Deface, which uses machine learning to recognize faces and blur them. Since this package removes the audio track, the script saves the audio first and merges it back after the faces are blurred.

#!/bin/bash

# Check if the correct number of arguments is provided
if [ "$#" -ne 1 ]; then
    echo "Usage: $0 <input_video_file>"
    exit 1
fi

# Assign argument to variable
INPUT_VIDEO="$1"
EXTRACTED_AUDIO="${INPUT_VIDEO%.*}_extracted_audio.mp3"
ANONYMIZED_VIDEO="${INPUT_VIDEO%.*}_anonymized.mp4"
FINAL_OUTPUT="${INPUT_VIDEO%.*}_blurred.mp4"
STABILIZED_OUTPUT="${FINAL_OUTPUT%.*}_final.mp4"
TRANSFORM_FILE="${FINAL_OUTPUT%.*}_transform.trf"

# Email details
email_subject="Video Processing Completed"
email_body="The video processing job has been completed."
recipient_email="terry.bae@gmail.com"  # Replace with your email address

# Activate the virtual environment
source ~/venv/bin/activate

# Extract audio from the original video
echo "Extracting audio from the original video..."
ffmpeg -hwaccel auto -i "$INPUT_VIDEO" -q:a 0 -map a "$EXTRACTED_AUDIO"
if [[ $? -ne 0 ]] || [[ ! -s "$EXTRACTED_AUDIO" ]]; then
    echo "Error: Failed to extract audio from the video or audio is empty."
    deactivate
    exit 1
fi

# Blur faces on the input video using deface
echo "Blurring faces on the input video..."
deface --scale 2560x1440 --thresh 0.5 "$INPUT_VIDEO"
if [[ $? -ne 0 ]] || [[ ! -s "$ANONYMIZED_VIDEO" ]]; then
    echo "Error: Failed to blur faces on the input video or anonymized video is empty."
    deactivate
    exit 1
fi

# Add the extracted audio back to the anonymized video
echo "Adding extracted audio back to the anonymized video..."
ffmpeg -hwaccel auto -i "$ANONYMIZED_VIDEO" -i "$EXTRACTED_AUDIO" -c copy -map 0:v:0 -map 1:a:0 "$FINAL_OUTPUT"
if [[ $? -ne 0 ]] || [[ ! -s "$FINAL_OUTPUT" ]]; then
    echo "Error: Failed to add audio to the anonymized video or final video is empty."
    deactivate
    exit 1
fi

# Step 1: Generate the stabilization transform file
# echo "Generating stabilization transform file..."
# 
# ffmpeg  -hwaccel auto -i "$FINAL_OUTPUT" -vf vidstabdetect=shakiness=5:accuracy=15:result="$TRANSFORM_FILE" -f null -
#  if [[ $? -ne 0 ]]; then
#    echo "Error: Failed to generate stabilization transform file."
#    deactivate
#    exit 1
# fi

# Step 2: Apply the stabilization transform to the video
# echo "Applying stabilization to the final output video..."
# ffmpeg -hwaccel auto -i "$FINAL_OUTPUT" -vf vidstabtransform=input="$TRANSFORM_FILE",unsharp=5:5:0.8:3:3:0.4 -vcodec h264_videotoolbox -b:v 5000k -acodec copy "$STABILIZED_OUTPUT"
# if [[ $? -ne 0 ]] || [[ ! -s "$STABILIZED_OUTPUT" ]]; then
#    echo "Error: Failed to stabilize the video or stabilized video is empty."
#    deactivate
#    exit 1
# fi

# Clean up intermediate files (-f: the transform file only exists if stabilization ran)
echo "Cleaning up intermediate files..."
rm -f "$EXTRACTED_AUDIO" "$TRANSFORM_FILE"

# Deactivate the virtual environment
deactivate

# Send an email notification
# echo -e "$email_body" | mail -s "$email_subject" "$recipient_email"

# echo "Process completed successfully. Stabilized output: $STABILIZED_OUTPUT"

The last part of the script contains logic to stabilize the video. It is commented out since I no longer use it.

4. Misc Editing

Other tasks, like concatenating, trimming, extracting audio, and combining audio, can be done with the following script.

#!/bin/bash

# Function to display menu and get user choice
function show_menu {
    echo "Select a command to execute:"
    echo "1) Concatenate: ffmpeg -f concat -i <input_text_file> -c copy <output_video_file>"
    echo "2) Extract Sound: ffmpeg -i <input_video_file> -q:a 0 -map a <output_sound_file>"
    echo "3) Combine Sound: ffmpeg -i <input_video_file1> -i <input_sound_file2> -c copy -map 0:v:0 -map 1:a:0 <output_video_file>"
    echo "4) Trim: ffmpeg -ss <start_time> -to <end_time> -i <input_video_file> -c copy <output_video_file>"
    echo "5) Mute: ffmpeg -i <input_video_file> -vcodec copy -an <output_video_file>"
    read -p "Enter choice [1-5]: " choice
}

show_menu

# Execute the selected command
case $choice in
    1)
        read -p "Enter input text file: " INPUT_FILE
        read -p "Enter output video file: " OUTPUT_FILE
        echo "Concatenating files..."
        ffmpeg -hwaccel auto -f concat -i "$INPUT_FILE" -c copy "$OUTPUT_FILE"
        ;;
    2)
        read -p "Enter input video file: " INPUT_FILE
        read -p "Enter output sound file: " OUTPUT_FILE
        echo "Extracting sound from video..."
        ffmpeg -hwaccel auto -i "$INPUT_FILE" -q:a 0 -map a "$OUTPUT_FILE"
        ;;
    3)
        read -p "Enter first input video file: " INPUT_FILE1
        read -p "Enter second input sound file: " INPUT_FILE2
        read -p "Enter output video file: " OUTPUT_FILE
        echo "Combining video and sound..."
        ffmpeg -hwaccel auto -i "$INPUT_FILE1" -i "$INPUT_FILE2" -c copy -map 0:v:0 -map 1:a:0 "$OUTPUT_FILE"
        ;;
    4)
        read -p "Enter start time 00:00:00 : " START_TIME
        read -p "Enter end time 00:00:00 : " END_TIME
        read -p "Enter input video file: " INPUT_FILE
        read -p "Enter output video file: " OUTPUT_FILE
        echo "Trimming video..."
        ffmpeg -hwaccel auto -ss "$START_TIME" -to "$END_TIME" -i "$INPUT_FILE" -c copy "$OUTPUT_FILE"
        ;;
    5)
        read -p "Enter input video file: " INPUT_FILE
        read -p "Enter output video file: " OUTPUT_FILE
        echo "Muting video..."
        ffmpeg -hwaccel auto -i "$INPUT_FILE" -vcodec copy -an "$OUTPUT_FILE"
        ;;
    *)
        echo "Invalid choice."
        exit 1
        ;;
esac

echo "Command executed successfully."
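For the concatenate option, FFmpeg's concat demuxer expects the input text file to list the clips in playback order, one `file` directive per line. A minimal example (the clip names are placeholders):

```
file 'clip1.mp4'
file 'clip2.mp4'
file 'clip3.mp4'
```

Since the script uses `-c copy`, the clips are joined without re-encoding, so they should all share the same codec and encoding parameters.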