This is Hybrid 3D.
Built from AI depth + custom stereo logic.
Designed for cinema in VR.
Click to download or support the project.
VisionDepth3D is licensed under a proprietary, no-derivatives license.
Forking, redistributing, modifying, or creating derivative works is strictly prohibited.
- GPU-accelerated stereo warping: per-pixel, depth-aware parallax shifting (CUDA + PyTorch)
- Built on the VisionDepth3D Method, including:
- Depth shaping (Pop Controls): percentile stretch + subject recenter + curve shaping for natural separation
- Subject-anchored convergence: EMA-stabilized zero-parallax tracking for comfort and consistency
- Dynamic stereo scaling (IPD): scene-aware intensity that adapts to depth variance
- Edge-aware masking + feathering: suppress halos and clean up subject boundaries
- Floating window (DFW): cinematic edge protection to prevent window violations
- Occlusion healing: fills stereo gaps and reduces edge artifacts
- Live preview + diagnostics: anaglyph, SBS, heatmaps, edge/mask inspection, stereo difference views
- Clip-range rendering for fast testing on difficult scenes before full renders
- Export formats: Half-SBS, Full-SBS, VR (SBS 1440×1600 per eye), Anaglyph, Passive Interlaced
- Encoding pipeline: FFmpeg with CPU and hardware encoders (NVENC/AMF/QSV) plus quality controls (CRF/CQ)
Result: A production-ready 2D-to-3D engine with real-time tuning tools, stability features, and flexible export formats for VR and cinema workflows.
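The core warping step above can be sketched in a few lines: shift each pixel horizontally by an amount proportional to its depth, once per eye, then place the two views side by side. This is a minimal NumPy illustration of the general technique, not VisionDepth3D's actual GPU kernel; the function name and parameters are illustrative.

```python
import numpy as np

def shift_eye(frame, depth, max_shift=12, sign=1):
    """Horizontally displace each pixel by an amount proportional to its depth.

    frame: (H, W, 3) uint8 image
    depth: (H, W) float array normalized to [0, 1] (1 = near)
    sign:  +1 for one eye, -1 for the other
    """
    h, w = depth.shape
    xs = np.arange(w)[None, :].repeat(h, axis=0)           # base x coordinates
    disp = (depth - 0.5) * 2.0 * max_shift                 # near pops out, far recedes
    src = np.clip(xs - sign * disp, 0, w - 1).astype(int)  # backward warp (gather)
    rows = np.arange(h)[:, None]
    return frame[rows, src]

# Build a toy frame and a left-to-right depth ramp, then warp both eye views.
frame = np.random.randint(0, 255, (4, 8, 3), dtype=np.uint8)
depth = np.tile(np.linspace(0.0, 1.0, 8), (4, 1))
left = shift_eye(frame, depth, sign=1)
right = shift_eye(frame, depth, sign=-1)
sbs = np.concatenate([left, right], axis=1)  # full-SBS style layout
```

A real pipeline adds the occlusion healing, edge masking, and convergence tracking listed above; this sketch only shows the per-pixel parallax gather.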
- 25+ supported depth models (ZoeDepth, MiDaS, DPT/BEiT, DINOv2, DepthPro, Depth Anything V1/V2, Distill-Any-Depth, Marigold, and more)
- One-click model switching with auto-download + local caching
- Multiple inference backends:
- PyTorch (Transformers / TorchHub)
- ONNXRuntime (CUDA / TensorRT)
- Diffusers FP16 (for diffusion-based depth)
- Image + video + batch workflows:
- Single image
- Image folder batch
- Full video depth rendering
- Video folder batch
- Optional high precision output (when supported) for cleaner disparity and stronger stability in post
- Built-in preview modes + colormaps for fast inspection
- Stability + safety features: resolution/shape handling, codec probing, and fallback behavior to avoid common crashes
Result: Fast, flexible depth generation for everything from quick tests to full-length depth videos ready for stereo conversion.
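The optional high-precision output mentioned above comes down to quantizing the model's floating-point depth map to 16 bits instead of 8 before saving, which preserves far more depth levels and reduces banding in the resulting disparity. A minimal NumPy sketch of that idea (the function name is illustrative, not VisionDepth3D's API):

```python
import numpy as np

def quantize_depth(depth, bits=16):
    """Normalize a float depth map to the full range of an unsigned integer type.

    16-bit output keeps 256x more depth levels than 8-bit, which means
    smoother disparity gradients and fewer banding artifacts in post.
    """
    lo, hi = float(depth.min()), float(depth.max())
    norm = (depth - lo) / (hi - lo) if hi > lo else np.zeros_like(depth)
    scale = (1 << bits) - 1
    dtype = np.uint16 if bits == 16 else np.uint8
    return (norm * scale).astype(dtype)

depth = np.random.rand(4, 4).astype(np.float32)  # stand-in for a model's output
d16 = quantize_depth(depth, bits=16)  # save as 16-bit PNG for high precision
d8 = quantize_depth(depth, bits=8)    # standard 8-bit export
```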
- Blend two depth sources into one cleaner, more stable depth map/video
- Frames or video mode:
- Pair two PNG frame folders
- Or pair two depth videos
- Live preview + scrubber: side-by-side (Base vs Blended) with fast frame navigation
- Edge-focused blend controls:
- White strength injection
- Feather blur smoothing
- CLAHE contrast shaping
- Bilateral edge-preserving denoise
- Normalization back to base for consistent depth scale
- Batch output options: overwrite base, output to new folder, or export a blended video
Result: Cleaner edges, stronger subject separation, and more consistent parallax behavior across full sequences.
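The "normalization back to base" control above can be sketched as a simple min/max rescale: the blended map keeps its improved edges but is mapped back onto the base map's depth range, so parallax strength stays consistent. A simplified illustration of the general idea, not the app's exact implementation:

```python
import numpy as np

def normalize_to_base(blended, base):
    """Rescale a blended depth map so its min/max match the base depth map."""
    b_lo, b_hi = float(base.min()), float(base.max())
    x_lo, x_hi = float(blended.min()), float(blended.max())
    if x_hi == x_lo:
        # Degenerate flat input: return the middle of the base range.
        return np.full_like(blended, (b_lo + b_hi) / 2.0)
    return (blended - x_lo) / (x_hi - x_lo) * (b_hi - b_lo) + b_lo

base = np.array([[0.2, 0.8], [0.4, 0.6]])
blended = np.array([[10.0, 50.0], [20.0, 40.0]])  # arbitrary scale after blending
out = normalize_to_base(blended, base)            # back in the 0.2-0.8 base range
```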
- RIFE interpolation (ONNX): 2× / 4× / 8× FPS generation with GPU acceleration
- Real-ESRGAN upscaling (ONNX): high-quality super-resolution with optional FP16
- Two processing pipelines:
- Merged (stable, low memory)
- Threaded (higher throughput, better utilization)
- Full video workflow support:
- Optional scene splitting for long videos
- Rebuild output with correct resolution, FPS, and encoding settings
- Render feedback: progress, FPS, ETA, logs, and safe cancel handling
Result: Turn low-res or low-FPS sources into clean, smooth outputs built for VR playback and high refresh displays.
The Live 3D tab brings realtime stereo conversion into VisionDepth3D. It allows users to capture a camera, capture card, or screen source, estimate depth live, and preview a stereoscopic 3D output without waiting for a full render.
- Realtime capture sources:
  - Camera / capture card input
  - Screen 1 / Screen 2 desktop capture
  - Configurable capture resolution and FPS
- Depth model selection:
  - Uses the same supported model list as the Depth Engine
  - Lightweight model defaults for live performance
  - Optional FP16 acceleration where supported
- Live VisionDepth3D stereo controls:
  - Foreground / midground / background shift
  - Max pixel shift
  - Parallax balance
  - Depth pop gamma
  - Subject tracking
  - Dynamic convergence
  - Edge masking
  - Feathering
  - Floating window support
- Preview and output controls:
  - Start directly in SBS mode
  - Adjustable preview resolution
  - Optional preview window disabling
  - Optional HTTP stream field for future streaming workflows
- Designed for fast tuning:
  - Test stereo settings before rendering
  - Check depth direction and pop-out behavior
  - Compare depth models quickly
  - Tune comfort settings before full video export
Result: A realtime VisionDepth3D sandbox for testing depth models, stereo settings, screen capture, and live 2D-to-3D conversion before committing to final renders.
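As a small example of what an anaglyph preview computes, a red/cyan anaglyph takes the red channel from the left eye view and the green/blue channels from the right eye view. A minimal sketch of the general technique, not the app's exact code:

```python
import numpy as np

def red_cyan_anaglyph(left, right):
    """Compose a red/cyan anaglyph: red channel from the left eye,
    green and blue channels from the right eye (assumes RGB order)."""
    out = right.copy()
    out[..., 0] = left[..., 0]
    return out

# Toy eye views: a red-ish left frame and a green-ish right frame.
left = np.zeros((2, 2, 3), dtype=np.uint8)
left[..., 0] = 200
right = np.zeros((2, 2, 3), dtype=np.uint8)
right[..., 1] = 150
ana = red_cyan_anaglyph(left, right)
```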
- Multi-tab interface with persistent settings
- Help menu
- Pause, resume, and cancel for long GPU jobs
- Multi-language UI support (EN, FR, ES, DE, JA)
- Hardware encoding options integrated into export workflow
- Stereo formats: Half-SBS, Full-SBS, VR180, Anaglyph, Passive Interlaced
- Aspect ratios: 16:9, 2.39:1, 2.76:1, 4:3, 21:9, 1:1, 2.35:1
- Containers: MP4, MKV, AVI
- Encoders: CPU + FFmpeg hardware options (NVENC/AMF/QSV) when available
- Python 3.13
- Git, if cloning the repository
- Conda, optional but recommended
- NVIDIA GPU recommended for best performance
- CUDA 12.8 tested
- AMD / Intel GPU support on Windows through DirectML
- CPU fallback available, but much slower
You can install VisionDepth3D in one of two ways:
- Go to the official VisionDepth3D GitHub page.
- Click the green Code button.
- Click Download ZIP.
- Extract the ZIP to a folder, for example: `C:\VisionDepth3D-main`
Open Command Prompt or Anaconda Prompt and run:

```shell
git clone https://github.com/VisionDepth/VisionDepth3D.git
cd VisionDepth3D
```

If you downloaded the ZIP instead, skip `git clone` and use `cd` to enter the extracted folder:

```shell
cd C:\VisionDepth3D-main
```

Open Command Prompt:

```shell
cd C:\VisionDepth3D-main
pip install -r requirements.txt
```

Then continue to Step 3: Install PyTorch for Your GPU.
Conda is recommended because it keeps VisionDepth3D dependencies isolated from the rest of your system.
Open Anaconda Prompt and run:

```shell
git clone https://github.com/VisionDepth/VisionDepth3D.git
cd VisionDepth3D
conda create -n VD3D python=3.13 -y
conda activate VD3D
pip install -r requirements.txt
```

If you downloaded the ZIP instead of cloning:

```shell
cd C:\VisionDepth3D-main
conda create -n VD3D python=3.13 -y
conda activate VD3D
pip install -r requirements.txt
```

VisionDepth3D uses PyTorch for AI depth models and GPU processing.
Different GPU types need different PyTorch installs.
If you have an NVIDIA GPU, install the CUDA build of PyTorch.
First, check your NVIDIA driver/CUDA support:

```shell
nvidia-smi
```

You can also check the CUDA Toolkit, if installed:

```shell
nvcc --version
```

Then install PyTorch using the official PyTorch selector:

https://pytorch.org/get-started/locally/

Recommended selector options:
- OS: Windows or Linux
- Package: Pip
- Language: Python
- Compute Platform: CUDA

Example for CUDA 12.8:

```shell
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu128
```

If your system uses a different CUDA version, use the command from the official PyTorch website instead. PyTorch's install selector is the safest source for the correct command.
If you have an AMD GPU or Intel GPU on Windows, install PyTorch DirectML.
DirectML allows PyTorch acceleration on supported non-NVIDIA GPUs through Windows DirectX 12.
Run this inside your VisionDepth3D environment:

```shell
pip install torch-directml
```

Use this option for:
- AMD Radeon GPUs on Windows
- Intel Arc / Intel integrated GPUs on Windows
- Systems without NVIDIA CUDA support
Important:
- DirectML is usually slower than NVIDIA CUDA.
- Some models or operations may fall back to CPU.
- If DirectML gives issues, use CPU mode as a fallback.
- Do not install CUDA PyTorch for AMD GPUs on Windows.
If you do not have a supported GPU, install the CPU version of PyTorch.
Use the official PyTorch selector:
https://pytorch.org/get-started/locally/

Recommended selector options:
- OS: Windows / Linux / Mac
- Package: Pip
- Language: Python
- Compute Platform: CPU
CPU mode works, but depth generation, upscaling, interpolation, and 3D processing will be much slower.
After installing PyTorch, test it.

NVIDIA CUDA check:

```shell
python -c "import torch; print('CUDA:', torch.cuda.is_available()); print(torch.cuda.get_device_name(0) if torch.cuda.is_available() else 'No CUDA GPU found')"
```

DirectML check:

```shell
python -c "import torch_directml; d=torch_directml.device(); print('DirectML device:', d)"
```

General install check:

```shell
python -c "import torch; print('PyTorch installed:', torch.__version__)"
```

After all dependencies are installed, launch VisionDepth3D with the correct script for your setup.
Windows Conda:

```shell
Start_VD3D_Conda.bat
```

Windows standard install:

```shell
Start_VD3D_Windows.bat
```

Linux:

```shell
Start_VD3D_Linux.bat
```

Or run directly:

```shell
python app.py
```

If you are using Conda, make sure your batch script activates the correct environment:

```shell
conda activate VD3D
python app.py
```

If you are using standard pip without Conda, make sure Python is available in PATH:

```shell
python app.py
```

Congrats, you have successfully installed VisionDepth3D.
Recommended setup:
- NVIDIA users: CUDA PyTorch
- AMD / Intel Windows users: torch-directml
- No GPU users: CPU PyTorch
For the best performance, an NVIDIA CUDA GPU is recommended.
1. Backup Your Weights
   Move your `weights` folder out of the old `VisionDepth3D-main` directory.

2. Download the Latest Version
   Delete the old folder and extract or clone the updated version of `VisionDepth3D-main`.

3. Restore Weights Folder
   Place your `weights` folder back inside the newly downloaded main directory:

   ```
   VisionDepth3D-main/weights
   ```

4. Update the Path in Startup Scripts
   Open the startup script matching your platform (`Start_VD3D_Windows.bat`, `Start_VD3D_Conda.bat`, or `Start_VD3D_Linux.sh`). Edit the script and replace any old folder path with the new path to your updated `VisionDepth3D-main`.

5. Activate Conda Environment (if needed)
   If you are using the Conda starter script, open a terminal or Anaconda Prompt and run:

   ```shell
   cd path/to/updated/VisionDepth3D-main
   Start_VD3D_Conda.bat
   ```

6. Launch the App
   Once everything is in place, run the appropriate script or shortcut to launch VisionDepth3D with your latest settings.
Note: If you customized any configuration, back up those files before replacing folders. If you run into import errors, run the following inside the opened terminal to fix any dependency errors:

```shell
pip install -r requirements.txt
```
VisionDepth3D includes a full professional user manual with workflows, tuning guides, and advanced features.
Start here: UserGuide.md
If you're new, begin with:
Depth Estimation → Depth Blender → 3D Generator → Preview & Clip Testing → Final Render
This tool is being developed by a solo dev with nightly grind energy (~4 hours a night). If you find it helpful, let me know: feedback, bug reports, and feature ideas are always welcome!
Thank You!
A heartfelt thank you to all the researchers, developers, and contributors behind the incredible depth estimation models and open-source tools used in this project. Your dedication, innovation, and generosity have made it possible to explore the frontiers of 3D rendering and video processing. Your work continues to inspire and empower developers like me to build transformative, creative applications.
| Model Name (UI) | Creator / Organization | Model ID / Repository |
|---|---|---|
| Marigold Depth v1.1 (Diffusers) | PRS ETH | prs-eth/marigold-depth-v1-1 |
| Marigold Depth v1.0 | PRS ETH | prs-eth/marigold-depth-v1-0 |
| Distill-Any-Depth Large (xingyang1) | xingyang1 | xingyang1/Distill-Any-Depth-Large-hf |
| Distill-Any-Depth Small (xingyang1) | xingyang1 | xingyang1/Distill-Any-Depth-Small-hf |
| Video Depth Anything Large | Depth Anything Team | depth-anything/Video-Depth-Anything-Large |
| Video Depth Anything Small | Depth Anything Team | depth-anything/Video-Depth-Anything-Small |
| Video Depth Anything (ONNX) | Depth Anything Team | Bundled ONNX backend (onnx:VideoDepthAnything) |
| Distill-Any-Depth Large (ONNX) | xingyang1 | Bundled ONNX backend (onnx:DistillAnyDepthLarge) |
| Distill-Any-Depth Base (ONNX) | xingyang1 | Bundled ONNX backend (onnx:DistillAnyDepthBase) |
| Distill-Any-Depth Small (ONNX) | xingyang1 | Bundled ONNX backend (onnx:DistillAnyDepthSmall) |
| DA3METRIC-LARGE | Depth Anything Team | depth-anything/DA3METRIC-LARGE |
| DA3MONO-LARGE | Depth Anything Team | depth-anything/DA3MONO-LARGE |
| DA3-LARGE | Depth Anything Team | depth-anything/DA3-LARGE |
| DA3-LARGE-1.1 | Depth Anything Team | depth-anything/DA3-LARGE-1.1 |
| DA3-BASE | Depth Anything Team | depth-anything/DA3-BASE |
| DA3-SMALL | Depth Anything Team | depth-anything/DA3-SMALL |
| DA3-GIANT | Depth Anything Team | depth-anything/DA3-GIANT |
| DA3-GIANT-1.1 | Depth Anything Team | depth-anything/DA3-GIANT-1.1 |
| DA3NESTED-GIANT-LARGE | Depth Anything Team | depth-anything/DA3NESTED-GIANT-LARGE |
| DA3NESTED-GIANT-LARGE-1.1 | Depth Anything Team | depth-anything/DA3NESTED-GIANT-LARGE-1.1 |
| Depth Anything v2 Large | Depth Anything Team | depth-anything/Depth-Anything-V2-Large-hf |
| Depth Anything v2 Base | Depth Anything Team | depth-anything/Depth-Anything-V2-Base-hf |
| Depth Anything v2 Small | Depth Anything Team | depth-anything/Depth-Anything-V2-Small-hf |
| Depth Anything v2 Metric Indoor (Large) | Depth Anything Team | depth-anything/Depth-Anything-V2-Metric-Indoor-Large-hf |
| Depth Anything v2 Metric Outdoor (Large) | Depth Anything Team | depth-anything/Depth-Anything-V2-Metric-Outdoor-Large-hf |
| Depth Anything v2 Giant (safetensors) | Depth Anything Team | Local weights (dav2:vitg_fp32) |
| Depth Anything v1 Large | LiheYoung | LiheYoung/depth-anything-large-hf |
| Depth Anything v1 Base | LiheYoung | LiheYoung/depth-anything-base-hf |
| Depth Anything v1 Small | LiheYoung | LiheYoung/depth-anything-small-hf |
| Prompt Depth Anything VITS Transparent | Depth Anything Team | depth-anything/prompt-depth-anything-vits-transparent-hf |
| LBM Depth | Jasper | jasperai/LBM_depth |
| DepthPro (Apple) | Apple | apple/DepthPro-hf |
| ZoeDepth (NYU+KITTI) | Intel | Intel/zoedepth-nyu-kitti |
| MiDaS 3.0 (DPT-Hybrid) | Intel | Intel/dpt-hybrid-midas |
| DPT Large (Intel) | Intel | Intel/dpt-large |
| DPT Large (Manojb) | Manojb | Manojb/dpt-large |
| DPT BEiT Large 512 | Intel | Intel/dpt-beit-large-512 |
| MiDaS v2 (Qualcomm) | Qualcomm | qualcomm/Midas-V2 |
This project utilizes the FFmpeg multimedia framework for video/audio processing via subprocess invocation. FFmpeg is licensed under the GNU GPL or LGPL, depending on how it was built. No modifications were made to the FFmpeg source or binaries; the software simply executes FFmpeg as an external process.
You may obtain a copy of the FFmpeg license at: https://www.gnu.org/licenses/
VisionDepth3D calls FFmpeg strictly for encoding, muxing, audio extraction, and frame rendering operations, in accordance with license requirements.




