HelpYourVideos

Full Members
  • Content Count

    13
  • Joined

  • Last visited

Community Reputation

18 Good

About HelpYourVideos

  • Rank
    Member

Profile Information

  • Location
    Germany
  • EUC
    What

  1. Not sure exactly what you mean by this. You can find some videos by searching for "360 VR". You have to use the YouTube app, but you don't necessarily need a VR headset, though most support it. There's also a cool but somewhat unrelated YouTube feature for spatial sound called "ambisonics". Anyway, here's a link to an example VR video (make sure you open this in the YouTube app).
  2. Don't forget, YouTube supports 360 video. If you mux audio into an unedited 360 video and keep the necessary spherical metadata, I think you can upload it and watch the whole 360 view, using the mouse to look around or even the gyroscope in your phone (there's an ffmpeg sketch after this list). Have you tried it?
  3. For your use case, agreed. However, don't forget to beef up that storage capacity, as even plain ProRes 422 targets roughly 200 Mbit/s at 1080p30. I can only imagine the file size of a 4K60 output. Likely ~100 GB+ for a 20-minute video? There's some rough math after this list. Edit: Found the answer to the bitrate question in a PDF published by Apple.
  4. Apple ProRes and the other so-called "mezzanine" intermediate formats are not lossless. They are considered "visually lossless", but they rewrite all the data streams. Essentially, they decode each frame and re-export it at a massive bitrate, typically 10-bit 4:2:2 (12-bit 4:4:4 in the ProRes 4444 variants) with only light compression. The main use for intermediate codecs is when you need to move the file between different programs to apply effects before the final export (color grading, VFX, etc.); there's an example ffmpeg ProRes export after this list. When you want to apply effects, you do have to re-encode, so you should try to do it all in one program like DaVinci.
  5. Once you're feeling a little more comfortable with your understanding of codecs, containers, etc., you can take the next step to my favorite program, Avidemux. It lets you losslessly trim videos at the keyframes, so you can losslessly remove the sections of those 4K videos you don't want. Or, if you have two videos with exactly the same parameters (likely if you record on the same settings with the same camera), you can losslessly concatenate them, remove sections, and so on (a command-line equivalent is sketched after this list). There should be some helpful videos out there, but some cliffs notes are: You should navigate the
  6. Apologies, I've never personally used MP4Box, just heard about its function, so I can't help troubleshoot there, but I have experience with mkvtoolnix and ffmpeg and can help with those. The reason it exports so quickly is that it writes a bit-exact copy of both input files, so it exports at the write speed of your HDD or SSD. Instead of re-encoding and making all those compute-intensive decisions about compression, the only thing your CPU has to add is the container's structure (MP4, MKV, WMV, AVI). You might consider downloading "MediaInfo" (see the commands after this list); then you can examine what exactly is contain
  7. Okay, add ffmpeg to your %Path% so you can call the program from any directory. After typing ffmpeg -i you can just drag the file(s) onto the command prompt/terminal so there are no typos in the file locations (a couple of check commands are sketched after this list). ffmpeg is very powerful indeed, for encoding as well as muxing. But make sure your output is correct, because mkvtoolnix should not be giving errors. Did you drag the video in, then drag the audio in, leaving the default selection of "Add as new source files to the current multiplex settings", and finally click "Start multiplexing"? Here is a link that details what codecs of video and
  8. Once you have a completed audio track (voiceover + music + edits), export only the audio (AAC is compatible with both MP4 and MKV; Ogg only with MKV). This video might help. Then download either mkvtoolnix or MP4Box, or use the ffmpeg command line (all free/open source), to "mux" the original video file with the audio you just saved. In mkvtoolnix it's as easy as dragging the video in, then the audio, and clicking "Start multiplexing" to save; it will be done almost instantly (the equivalent mkvmerge command is sketched after this list).
  9. The term you're looking for is "voice over". Most non-linear editor (NLE) video software, such as Shotcut, Premiere Pro, Final Cut Pro, DaVinci Resolve, etc., will let you enable a microphone and record sound while the video is playing. https://www.shotcut.org/blog/video-tutorial-voiceover/ Alternatively, it's easy enough to record your voice in an audio-only program like Audacity while watching the video in any video player like VLC or mpv. Then you can examine the waveform, trim it to where you start talking, and align the audio manually with the video in the NLE. It is recommended you r
  10. If you can, export from Insta studio with an absurdly high bitrate, say H.264 slow at CRF 12 or 40-80 Mbps, then delete the file once you're done editing (an ffmpeg equivalent is sketched after this list). I would second Meserias: Shotcut is free and uses an ffmpeg backend, which allows for high quality and numerous output options. Shotcut uses an MLT file to store your edits and translate them into instructions for the ffmpeg backend to apply, which means that as you apply edits, the project saves instantly and takes very little storage. There is a very active developer and forum. I think they recently added proxy editing. And if I'm reading correc
  11. One thing most of you have not mentioned, which is probably the most important factor for video quality, is "generation loss". Each time you export (encode) a video with a lossy codec you are changing data and losing quality, almost regardless of the bitrate. For this reason it's important to take your source video, apply all the edits, and encode only once. That said, if you exported those two 30-minute clips with the same parameters, you should be able to losslessly join ("concatenate") them using software other than OpenShot; there's an ffmpeg sketch for that after this list. This avoids encoding a secon
  12. There is no way to avoid the black level being distorted. YouTube converts Rec.709-flagged video from full range 0 (black)-255 (white) to limited range 16-235, the limit for 8-bit broadcast levels, as shown in the diagram linked below. If you want to test this, you can add a Chrome extension called ColorZilla and use its eyedropper tool on your video to see that the black sits at 16 (a way to check how your own file is flagged is sketched after this list). https://imgur.com/a/fb5z7GN
  13. Hey, I assume you're looking for advice on improving video quality. In no particular order, here are some things you can change to improve your workflow and quality. Two-pass encoding at a fixed bitrate will typically give a worse result than picking a low CRF value (say 18-12, usually 16) with a slow (slower, or very slow) preset; see the sketch after this list. CRF lets the bitrate fluctuate to match the content, allocating data when and where it is needed, which means some portions may use ~2 Mbps and others ~60 Mbps, for example. Another thing that will improve quality is rend
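
Sketch for post 2: a minimal ffmpeg remux of a narration track into an unedited 360 clip. The filenames are placeholders, and it assumes the spherical (360) metadata survives the stream copy; if YouTube stops treating the upload as 360, Google's spatial-media metadata injector can add it back.

    ffmpeg -i video360.mp4 -i narration.aac -map 0:v -map 1:a -c copy -movflags +faststart muxed360.mp4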
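
Rough math for post 3, assuming the ProRes 422 data rate scales roughly with pixel count and frame rate (the exact targets are in the Apple PDF mentioned in the post):

    1080p30 ≈ 200 Mbit/s; 4K60 has 4x the pixels and 2x the frames, so roughly 8 x 200 = 1600 Mbit/s
    20 min = 1200 s, so 1600 Mbit/s x 1200 s ÷ 8 ≈ 240,000 MB ≈ 240 GB

So "~100 GB+" is, if anything, on the low side.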
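
Sketch for post 4: one way to write a ProRes 422 HQ intermediate with ffmpeg's prores_ks encoder (filenames are placeholders; audio is kept as uncompressed PCM so the intermediate step doesn't touch it lossily):

    ffmpeg -i timeline_export.mov -c:v prores_ks -profile:v 3 -pix_fmt yuv422p10le -c:a pcm_s16le intermediate.mov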
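
Sketch for post 5: a command-line equivalent of Avidemux's keyframe-aligned lossless trim, using ffmpeg stream copy (times and filenames are placeholders; with -c copy the cut snaps to the nearest keyframe rather than the exact timestamp):

    ffmpeg -ss 00:01:30 -i input_4k.mp4 -t 00:02:00 -c copy trimmed.mp4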
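
For post 6: two quick ways to examine exactly what a file contains, assuming the MediaInfo command-line version and mkvtoolnix are installed (the filename is a placeholder):

    mediainfo input.mp4
    mkvmerge --identify input.mp4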
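
For post 7: once ffmpeg is on your %Path%, you can confirm it resolves from any directory and then inspect what actually ended up in the muxed output (the filename is a placeholder):

    ffmpeg -version
    ffprobe -hide_banner muxed_output.mp4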
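
Sketch for post 8: the same MKVToolNix mux done with its mkvmerge command line instead of the GUI (filenames are placeholders; mkvmerge only writes MKV/WebM, so use the ffmpeg remux from the post 2 sketch if you need an MP4):

    mkvmerge -o final.mkv original_video.mp4 voiceover.aac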
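
Sketch for post 10: an ffmpeg version of the "absurdly high bitrate" editing copy, in case the camera software can't export one itself (filenames are placeholders):

    ffmpeg -i insta_export.mp4 -c:v libx264 -preset slow -crf 12 -c:a copy editing_copy.mp4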
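
Sketch for post 11: lossless concatenation with ffmpeg's concat demuxer, assuming the two clips really do share identical codec parameters (filenames are placeholders; list.txt holds one line per clip, in order, e.g. file 'clip1.mp4' then file 'clip2.mp4'):

    ffmpeg -f concat -safe 0 -i list.txt -c copy joined.mp4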
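
For post 12: a way to check how the source file itself is flagged before blaming the upload, using ffprobe (the filename is a placeholder). "pc" means full range 0-255, "tv" means limited range 16-235:

    ffprobe -v error -select_streams v:0 -show_entries stream=color_range,color_space -of default=noprint_wrappers=1 input.mp4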
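
Sketch for post 13: the CRF-plus-slow-preset export described above (filenames are placeholders; adjust CRF between roughly 12 and 18 to taste, lower meaning higher quality and bigger files):

    ffmpeg -i master.mov -c:v libx264 -preset slower -crf 16 -c:a aac -b:a 192k final.mp4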