
NCCL and Python

Overview

The NVIDIA Collective Communications Library (NCCL, pronounced "Nickel") is a library of multi-GPU collective communication primitives that are topology-aware and can be easily integrated into applications. Collective communication primitives are common patterns of data transfer among a group of CUDA devices; NCCL is a stand-alone library of these standard routines for GPUs, implementing all-reduce, all-gather, reduce, broadcast, reduce-scatter and all-to-all, as well as any send/receive based communication pattern, all optimized to achieve high bandwidth over PCIe and NVLink. MSCCL is a related inter-accelerator communication framework built on top of NCCL that uses its building blocks to execute custom-written collective communication algorithms.

NCCL from Python

NCCL exposes a simple C API, and many deep learning frameworks ship support libraries, written in C, to bridge between Python and NCCL. PyTorch is the most common route: by default on Linux, the Gloo and NCCL backends are built and included in PyTorch distributed (NCCL only when building with CUDA), and the NCCL backend is included in the pre-built binaries with CUDA support. PyTorch also lets you write new neural network layers in Python itself, using your favorite libraries and packages such as Cython and Numba. CuPy links against NCCL as one of its CUDA Toolkit dependencies (its benchmark figure shows CuPy's speedup over NumPy) and exposes a low-level binding, and there is a standalone "NVIDIA NCCL2 Python bindings" project built on ctypes and Numba, originally part of the distributed deep learning project necklace; many codes and ideas of that project come from the pyculib project.

Tuning and troubleshooting basics

NCCL has an extensive set of environment variables to tune for specific usage. They can be exported in the shell that starts the Python training script, or set statically in /etc/nccl.conf (for an administrator to set system-wide values) or ~/.nccl.conf (for users). For example, increasing the number of channels can be beneficial to performance, although it also increases GPU utilization for collective operations: export NCCL_MIN_NCHANNELS=32 raises the minimum channel count, a setting suggested when a workload uses fewer than eight MI300 GPUs in a system. NCCL also opens TCP ports to connect processes together and exchange connection information, which matters on firewalled clusters. A side note on profiling: when profiling Python-only workflows with Nsight Systems, consider disabling the CPU sampling option to reduce overhead; the --python-sampling-frequency option accepts integers between 1 Hz and 2 kHz and is ignored when --python-sampling is turned off.

In practice, the recurring questions are: which command output can be checked to validate the NCCL installation, why dist.init_process_group reports a connection failure, why Windows raises "Distributed package doesn't have NCCL built in" (NCCL does not support Windows; with TensorFlow you can stay on Windows by using HierarchicalCopyAllReduce instead of the NCCL-based cross-device ops), whether InfiniBand is the culprit (a quick check is to rerun with NCCL_IB_DISABLE=1 python test_ddp.py), and how to broadcast tensors using NCCL from the rank 0 process to all other processes.
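The broadcast question comes up often enough to deserve a concrete sketch. The example below is not taken from any of the quoted threads; it is a minimal illustration that assumes a single node with two GPUs and a torchrun launch, and the file name is only illustrative.

# broadcast_demo.py - minimal sketch of broadcasting a tensor from rank 0 over NCCL.
# Assumed launch: torchrun --nproc_per_node=2 broadcast_demo.py  (single node, 2 GPUs)
import torch
import torch.distributed as dist

def main():
    dist.init_process_group(backend="nccl")   # RANK/WORLD_SIZE come from torchrun
    rank = dist.get_rank()
    torch.cuda.set_device(rank)               # single node, so rank doubles as GPU index

    if rank == 0:
        payload = torch.arange(4, dtype=torch.float32, device="cuda")
    else:
        payload = torch.empty(4, dtype=torch.float32, device="cuda")

    dist.broadcast(payload, src=0)            # NCCL broadcast from rank 0 to all ranks
    print(f"rank {rank} received {payload.tolist()}")
    dist.destroy_process_group()

if __name__ == "__main__":
    main()

Run with torchrun --nproc_per_node=2 and both ranks should print the same values.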
A frequently suggested workaround when NCCL picks the wrong network interface, or misbehaves with peer-to-peer transfers, is to pin the socket interface and disable the P2P transport before launching the script:

export NCCL_SOCKET_IFNAME=eth0
export NCCL_P2P_DISABLE=1
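The same variables can also be set from Python, as long as that happens before the process group (and therefore the NCCL communicator) is created. This is a sketch; the interface name eth0 is an assumption and must match what your host actually exposes.

# Sketch: setting NCCL variables from Python before dist.init_process_group().
# "eth0" is an assumption; check `ip addr` on your host for the real interface name.
import os

os.environ.setdefault("NCCL_SOCKET_IFNAME", "eth0")  # force NCCL onto a specific NIC
os.environ.setdefault("NCCL_P2P_DISABLE", "1")       # disable NVLink/PCIe peer-to-peer transport
os.environ.setdefault("NCCL_DEBUG", "INFO")          # make NCCL print its own log when debugging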
Getting a working environment

One fix reported as tested on a multi-GPU A100 system is simply to start from a clean conda environment (conda create -n pya100 python=3.9), reinstall PyTorch there, and confirm the toolkit with nvcc --version. Python 3.8 to 3.11 is supported, and Python 3.11 is generally installed by default on the supported Linux distributions, which meets the recommendation. If you need a particular CUDA version for CuPy (say 12.0), use the cuda-version metapackage to select it, e.g. conda install -c conda-forge cupy cuda-version=12.0. NCCL itself is available for download as part of the NVIDIA HPC SDK and as a separate package for Ubuntu and Red Hat, and the NCCL Developer Guide remains the reference document for developers who want to use NCCL from a C/C++ application or library. Developers of deep learning frameworks can rely on NCCL's highly optimized, MPI-compatible and topology-aware routines to take full advantage of all available GPUs within and across multiple nodes, which is why the NCCL backend is the recommended backend for GPU training.

When building PyTorch from source, you can either rebuild the bundled NCCL (rm -r pytorch/build/nccl*, then MAX_JOBS=32 USE_CUDA=1 USE_NCCL=1 USE_SYSTEM_NCCL=0 USE_GLOO=0 python setup.py develop, then run the test file to check for errors) or point the build at a system NCCL by explicitly passing USE_SYSTEM_NCCL=1. If the conda-supplied CUDA binaries cause trouble during compilation, one suggestion is to uninstall them and install a full local CUDA toolkit, including the compiler. For Intel XPU back ends, the extension is built with COMPUTE_BACKEND=dpcpp python setup.py install, optionally against the oneCCL shipped in the oneAPI basekit (USE_SYSTEM_ONECCL=ON). Other Python entry points exist as well: Ray exposes ray.util.collective.init_collective_group(world_size, rank, backend='nccl', group_name='default') to initialize a collective group inside an actor process, where world_size is the total number of processes in the group, rank is the rank of the current process and backend selects the communication library. (PyNaCl, which sometimes shows up in searches because of the similar name, is a Python binding to libsodium, a cryptography library, and has nothing to do with NCCL.)

Launching one process per GPU

Unless a launcher such as torchrun or python -m torch.distributed.launch is used, multi-GPU scripts typically start their workers with torch.multiprocessing.spawn: spawn takes the function to execute as its first argument and passes values to it through args, then runs nprocs processes in parallel, calling the function as f(i, *args), where i is the process index.
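A minimal sketch of that spawn pattern, with a placeholder worker instead of a real training function:

# Sketch of torch.multiprocessing.spawn: f(i, *args) is called once per process,
# with i being the process index in [0, nprocs).
import torch.multiprocessing as mp

def worker(i, message, nprocs):
    # In real training code this is where each process would pick GPU `i`,
    # set MASTER_ADDR/MASTER_PORT and call dist.init_process_group("nccl").
    print(f"process {i}/{nprocs}: {message}")

if __name__ == "__main__":
    mp.spawn(worker, args=("hello from spawn", 2), nprocs=2, join=True)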
Design and API

Initially developed as an open-source research project, NCCL is lightweight, depending only on the usual C++ and CUDA libraries. The NCCL 2.3 release made NCCL fully open source and available on GitHub, while the pre-built and tested binaries (debs, rpms, tgz) continue to be available on the NVIDIA Developer Zone, and each release ships notes describing its key features, software enhancements, improvements and known issues. NCCL closely follows the popular collectives API defined by MPI, so anyone familiar with MPI will find the NCCL API natural to use, and it implements both collective communication and point-to-point send/receive primitives. Since NCCL 2.7, point-to-point communication can express any communication pattern between ranks: each transfer needs two calls, ncclSend() on one rank and a corresponding ncclRecv() on the other, with the same count and data type. In the specific case of a single process driving several devices, ncclCommInitAll can create all the communicators at once. Platform-specific variants also exist; AIACC-NCCL (also distributed as Deep-NCCL) is an AI-accelerator communication framework for NVIDIA NCCL optimized to achieve high bandwidth on Alibaba Cloud machines using PCIe, NVLink and NVSwitch, as well as networking over InfiniBand Verbs, eRDMA or TCP/IP sockets.

In PyTorch, dist.init_process_group("nccl") tells PyTorch to do the setup required for distributed training and to use the NCCL backend, which is usually the recommended choice for CUDA tensors but is not available on Windows. PyTorch itself takes a "Python first" approach: it is not a Python binding into a monolithic C++ framework, it is built to be deeply integrated into Python, its functionality can be extended with common libraries such as NumPy and SciPy, and automatic differentiation is done with a tape-based system at the functional and neural network layer levels. Downstream projects inherit these NCCL requirements; vLLM's binaries, for example, are compiled against CUDA 12.1 and public PyTorch releases by default, and the recommended install is a fresh conda environment (conda create -n myenv python=3.9, conda activate myenv, pip install vllm). If NCCL's TCP ports need to be confined on a cluster, the net.ipv4.ip_local_port_range property of the Linux kernel can be set; a typical example restricts NCCL ports to 50000-51000.

CuPy and the low-level NCCL binding

CuPy is an open-source array library for GPU-accelerated computing with Python, used much as you would use NumPy, SciPy or scikit-learn; it builds on CUDA Toolkit libraries including cuBLAS, cuRAND, cuSOLVER, cuSPARSE, cuFFT, cuDNN and NCCL, and most operations perform well on a GPU out of the box. Its cupy.cuda.nccl.NcclCommunicator class initializes an NCCL communicator for one device controlled by one process; the constructor takes ndev (the total number of GPUs to be used), commId (the unique ID returned by get_unique_id()) and rank. There is also user interest in being able to initialize an NCCL clique and pass CuPy arrays directly to the collective functions, since the current wrapper otherwise requires working at the level of ctypes pointers rather than native Python objects.
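To make the constructor parameters concrete, here is a sketch of a two-GPU all-reduce through CuPy's binding. It is not from the CuPy documentation itself; it assumes at least two GPUs and a CuPy build with NCCL support, and it shares the unique ID with the workers simply by passing it as a spawn argument.

# Sketch: two processes, one GPU each, doing an NCCL all-reduce via cupy.cuda.nccl.
import multiprocessing as mp

import cupy
from cupy.cuda import nccl

NDEV = 2  # total number of GPUs / ranks

def worker(rank, uid):
    cupy.cuda.Device(rank).use()                    # one GPU per rank
    comm = nccl.NcclCommunicator(NDEV, uid, rank)   # ndev, commId, rank
    x = cupy.full(4, rank + 1, dtype=cupy.float32)  # rank 0 holds 1s, rank 1 holds 2s
    comm.allReduce(x.data.ptr, x.data.ptr, x.size,  # in-place sum across ranks
                   nccl.NCCL_FLOAT32, nccl.NCCL_SUM,
                   cupy.cuda.Stream.null.ptr)
    cupy.cuda.Stream.null.synchronize()
    print(f"rank {rank}: {x}")                      # both ranks should print [3. 3. 3. 3.]

if __name__ == "__main__":
    ctx = mp.get_context("spawn")                   # safer than fork once CUDA is involved
    uid = nccl.get_unique_id()                      # generated once, shared with every rank
    procs = [ctx.Process(target=worker, args=(r, uid)) for r in range(NDEV)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()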
TensorFlow and other frameworks

Leading deep learning frameworks such as Caffe2, Chainer, MXNet, PyTorch and TensorFlow have integrated NCCL to accelerate deep learning training on multi-GPU, multi-node systems, and Python-level libraries such as Horovod (probably the most widely used) exist to hide that plumbing behind a simpler interface. Measurements published by NVIDIA show NCCL bus bandwidth scaling to as many as 24,576 GPUs (Figure 4), and Figure 5 shows that the resulting improvement in deep-learning training is significant and increases as jobs scale to larger GPU counts (the published comparison is NCCL 2.4 against the 2D hierarchical rings used with NCCL 2.3). Outside the mainstream builds, a Heterogeneous Run Time version of TensorFlow adds heterogeneous capabilities to TensorFlow, using a heterogeneous computing infrastructure framework to speed up deep learning on Arm-based heterogeneous embedded platforms, and on Windows MyCaffe uses the nccl64_134.dll library for multi-GPU communication during multi-GPU training. In stock TensorFlow, NCCL appears as the NCCL all-reduce implementation of CrossDeviceOps used by MirroredStrategy, with HierarchicalCopyAllReduce as the non-NCCL alternative mentioned earlier.
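A small sketch of how that choice is expressed; the platform check is just an illustration of when you might prefer the non-NCCL implementation.

# Sketch: selecting the cross-device reduction implementation for tf.distribute.
import platform
import tensorflow as tf

# HierarchicalCopyAllReduce avoids NCCL (the usual suggestion on Windows);
# NcclAllReduce is the NCCL-backed implementation used on Linux GPU hosts.
if platform.system() == "Windows":
    cross_ops = tf.distribute.HierarchicalCopyAllReduce()
else:
    cross_ops = tf.distribute.NcclAllReduce()

strategy = tf.distribute.MirroredStrategy(cross_device_ops=cross_ops)
with strategy.scope():
    model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(8,))])
    model.compile(optimizer="sgd", loss="mse")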
Troubleshooting PyTorch + NCCL

Several recurring failure modes show up in bug reports. init_process_group('nccl') can hang with particular combinations of PyTorch, Python and CUDA; one reported reproduction is simply conda create -n py38 python=3.8, conda activate py38, conda install pytorch torchvision torchaudio with the matching CUDA build, then running a basic DDP script. NCCL 2.4.8 was reported to error out during PyTorch process group creation when the same command was executed on two machines with ranks 0 and 1 after setting MASTER_ADDR and MASTER_PORT. An "unhandled cuda error, NCCL version 2.7.8" message means something went wrong on the NCCL side; set NCCL_DEBUG=INFO so NCCL prints its own log and the real cause becomes visible (similar debug toggles exist elsewhere, for example a TVM report enabled set(USE_RELAY_DEBUG ON) in config.cmake and ran the script with TVM_LOG_DEBUG=1 python main.py). On Windows, the warning "Attempted to get default timeout for nccl backend, but NCCL support is not compiled" and the error "Distributed package doesn't have NCCL built in" both mean the NCCL backend simply is not there; NCCL drivers do not work with Windows, so the usual answer is to pass backend='gloo', although one old TensorFlow issue suggests building NCCL for Win x64 yourself. Other practical notes: PyTorch distributed supports Linux (stable), macOS (stable) and Windows (prototype); switching the launcher to python -m torch.distributed.launch (or torchrun) has resolved some hangs on its own; OMP_NUM_THREADS defaults to 1 per process to avoid overloading the system and should be tuned for your workload; NVIDIA's Apex provides apex.parallel.DistributedDataParallel and apex.amp, with its ImageNet example showing convenient multiprocess distributed training optimized for NCCL; and containers built from nvidia/cuda:*-devel-ubuntu22.04 images already carry the toolkit pieces NCCL needs. On InfiniBand machines, if NCCL_IB_DISABLE=0 but NCCL_IB_HCA does not name a device shown by rdma link, NCCL reports that it cannot find an IB device and falls back to NET/Socket, picking a usable interface such as ib0 (visible in the log as "[send] via NET/Socket" and the "NCCL INFO Using" device list).

A typical DDP-over-NCCL training script follows the same few steps: initialize the process group with dist.init_process_group(backend='nccl') (dist.is_nccl_available() can be used to check the backend first, and other environment parameters can be set at this point), bind each process to its GPU with torch.cuda.set_device(local_rank), wrap the model in DistributedDataParallel, and let NCCL transfer and synchronize data, model replicas and gradients during training.
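A minimal sketch of those steps, assuming a torchrun launch (which sets LOCAL_RANK) and using a toy linear model and random data as placeholders:

# Sketch of the DDP-over-NCCL pattern: one process per GPU, launched with
# `torchrun --nproc_per_node=<gpus> train.py`; model and data are placeholders.
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

dist.init_process_group(backend="nccl")      # NCCL moves gradients between GPUs
local_rank = int(os.environ["LOCAL_RANK"])   # provided by torchrun
torch.cuda.set_device(local_rank)

model = torch.nn.Linear(16, 1).cuda(local_rank)
model = DDP(model, device_ids=[local_rank])  # gradient all-reduce happens via NCCL
opt = torch.optim.SGD(model.parameters(), lr=0.1)

x = torch.randn(32, 16, device=f"cuda:{local_rank}")
y = torch.randn(32, 1, device=f"cuda:{local_rank}")
loss = torch.nn.functional.mse_loss(model(x), y)
loss.backward()                              # triggers the NCCL all-reduce of gradients
opt.step()

dist.destroy_process_group()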
Installing cuDNN and NCCL

For CuPy, the recommendation is to install cuDNN and NCCL using the binary packages (i.e. apt or yum) provided by NVIDIA; if you install the tar-gz versions instead, put them under the CUDA_PATH directory. A plain conda install -c conda-forge cupy pulls the CUDA dependencies in as well, while conda install -c conda-forge cupy-core gives a slim installation without them. If apt search nccl does not show the release you need, NVIDIA's repositories carry the versioned libnccl2/libnccl-dev packages. By default you will have to use the command python3 to run Python; if you want to use just python, you can symlink it to the python3 binary. The official nccl-tests are built with make CUDA_HOME=/path/to/cuda NCCL_HOME=/path/to/nccl (NCCL_HOME is needed when NCCL is not installed under /usr) and rely on MPI to run across multiple processes and hence multiple nodes; crashes of nccl-tests at scale (one report involved 16 GPUs) are worth debugging with the same NCCL_DEBUG logging, and one of the quoted reports compares the NCCL P2P APIs against plain memcpy to confirm expected behaviour. NCCL and cuDNN are also distributed as Python wheels on PyPI (the nvidia-cudnn-cu11 and vllm-nccl-cu11 packages quoted above), and HPC sites often expose them as environment modules alongside specific CUDA versions (for example the cuda-11.x and nccl modules on the Anvil cluster). Finally, the MSCCL Python package mentioned earlier ships with a registry of synthesis strategies and hand-optimized algorithms, which are loaded into the runtime through the msccl.init function; it must be called before the application creates its NCCL communicator.

Once everything is installed, the next question is usually which NCCL library the Python process is actually loading.
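One way to answer that from Python is to ask the shared library directly through ctypes. This is a sketch, not part of any of the packages above; it assumes a Linux system where the loader can find libnccl.so.2.

# Sketch: query the installed libnccl for its version via ctypes.
import ctypes

lib = ctypes.CDLL("libnccl.so.2")            # assumes libnccl.so.2 is on the loader path
ver = ctypes.c_int()
ret = lib.ncclGetVersion(ctypes.byref(ver))  # ncclResult_t ncclGetVersion(int *version)
assert ret == 0, f"ncclGetVersion failed with NCCL error code {ret}"

# NCCL >= 2.9 encodes the version as major*10000 + minor*100 + patch.
v = ver.value
print(f"NCCL {v // 10000}.{(v // 100) % 100}.{v % 100}")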
Reading NCCL logs and runtime behaviour

With NCCL_DEBUG=INFO set, a healthy start-up looks like the MLVM excerpt quoted above: a Bootstrap line reporting the interface in use (here ibP257s474637 with its 172.16.x.x address), followed by NET/Plugin lines. The message "Plugin load (libnccl-net.so) returned 2 : libnccl-net.so: cannot open shared object file: No such file or directory ... No plugin" is harmless; it only means no external network plugin is installed, and NCCL falls back to its built-in transports. Version confusion is another frequent theme: the version reported by python -c "import torch;print(torch.cuda.nccl.version())" is the NCCL that PyTorch was built against, which may differ from the system packages, and in containers where locate is unavailable you can list the libraries the loader sees with ldconfig -v. On error handling, one observation on single- and multi-host CUDA setups is that when a collective times out and the exception is raised at the C++ level (with TORCH_NCCL_ASYNC_ERROR_HANDLING=1), it propagates through a few try/catch blocks but is eventually left unhandled, so the Python processes terminate via SIGABRT; a SIGABRT after a long stall is therefore usually a timeout rather than a crash in your own code. The Environment Variables chapter of the NCCL documentation lists every NCCL_* knob used in the examples above.

Finally, point-to-point transfers have their own semantics. When sending a PyTorch tensor from one machine to another with torch.distributed over NCCL, dist.send does not block the Python process; it inserts a synchronization barrier between the NCCL communication stream and the main computation stream (or the context of the stream where dist.send is called).
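A sketch of that send/receive pairing, mirroring the ncclSend/ncclRecv rule described earlier (NCCL 2.7 or newer); it assumes exactly two ranks launched with torchrun.

# Sketch: point-to-point send/recv with the NCCL backend; assumes two ranks.
import torch
import torch.distributed as dist

dist.init_process_group(backend="nccl")
rank = dist.get_rank()
torch.cuda.set_device(rank)

t = torch.zeros(4, device="cuda")
if rank == 0:
    t += 42
    dist.send(t, dst=1)        # with NCCL this is enqueued on a communication stream (see note above)
else:
    dist.recv(t, src=0)
    torch.cuda.synchronize()   # conservative: make sure the transfer finished before reading t
    print("rank 1 got", t.tolist())

dist.destroy_process_group()

As with the collectives, both sides must agree on the tensor's size and dtype, exactly as ncclSend and ncclRecv must agree on count and data type.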