
Docker is an open-source platform that automates the deployment, scaling, and management of applications. It uses containerization technology to bundle applications and their dependencies into a standardized unit for software development. This ensures that the application will run the same regardless of the environment it's in, thus solving the "it works on my machine" problem. Docker is widely used in deep learning to create consistent, reproducible environments, which is very important when working with complex libraries such as PyTorch.
To install Docker on your Ubuntu system, first update your package lists, then install the docker.io package:
sudo apt update
sudo apt install docker.io -y
Run the following command to show the network interfaces on your computer.
ip address show
To list the networks managed by Docker, run the command:
sudo docker network ls
You can launch a container on Docker's default network using the following command line.
sudo docker run -itd --rm --name [container name] [image]
`-itd` runs the Docker container interactively in detached mode, meaning it keeps running in the background.
Use `-d` instead of `-itd` if you don't need an interactive session and only want the container to run in the background.
`--rm` instructs Docker to automatically remove the container once it's no longer in use.
`[image]` represents the specific image that you want your Docker container to utilize.
You can use the following commands to see the virtual Ethernet interfaces created and the bridge links, respectively.
ip address show
bridge link
The `sudo docker inspect` command provides detailed information about a Docker object, which could be a container, image, volume, network, or node. The command returns information in JSON format and can be used for troubleshooting or scripting. For a PyTorch deep learning setup like the one described here, you could use this command to examine the state and configuration of your Docker containers or images to make sure they are set up correctly.
For example, to inspect the bridge (the default network type that Docker uses):
sudo docker network inspect bridge
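Because the inspect output is plain JSON, it is easy to post-process in scripts. As a sketch, here is a short Python example that extracts the subnet and gateway from a trimmed, hypothetical sample of `docker network inspect bridge` output; the values below are illustrative, not captured from a real host.

```python
import json

# Trimmed, illustrative sample of `sudo docker network inspect bridge` output
# (hypothetical values; real output contains many more fields).
sample_output = """
[
    {
        "Name": "bridge",
        "Driver": "bridge",
        "IPAM": {
            "Config": [
                {"Subnet": "172.17.0.0/16", "Gateway": "172.17.0.1"}
            ]
        }
    }
]
"""

def bridge_addressing(inspect_json: str) -> dict:
    """Pull the subnet and gateway out of `docker network inspect` JSON."""
    networks = json.loads(inspect_json)
    config = networks[0]["IPAM"]["Config"][0]
    return {"subnet": config["Subnet"], "gateway": config["Gateway"]}

print(bridge_addressing(sample_output))
# {'subnet': '172.17.0.0/16', 'gateway': '172.17.0.1'}
```

In practice you would pipe the real command output into such a script instead of using the embedded sample.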
To open an interactive shell inside a running Docker container, use:
sudo docker exec -it [container name] sh
To create a custom (user-defined) bridge other than the default bridge, use the following command.
sudo docker network create [network name]
Now use the custom network to create a new container as follows:
sudo docker run -itd --rm --network [network name] --name [container name] [image]
Note that containers on a user-defined bridge and containers on the default bridge are isolated and can't talk to each other. One advantage of a user-defined bridge is that you can ping another container on the same network by name. For example, if you have loki and odin containers on the same user-defined network, you can use:
sudo docker exec -it loki sh
# ping odin
where odin is the name of the other container. With the default bridge, by contrast, you have to specify the IP address, since DNS resolution by container name is not available.
Run the first command below to view the running containers, and the second to view all containers, including stopped or exited ones. 'ps' stands for process status. Useful flags: -a (--all), -l (--latest), -f (--filter, e.g., by name, status, or image), and --format to customize the output (e.g., JSON or a table with specific columns).
sudo docker ps
sudo docker ps -a
Stop a running container with its name or ID:
sudo docker stop [container name/container id]
Step 1: Create a Dockerfile. You can take a base image and add your own instructions in the Dockerfile.
First, create a directory and name it dockertmp.
mkdir dockertmp
cd dockertmp
Create a Dockerfile.
touch Dockerfile
Now open the 'Dockerfile' file in a text editor to add the instructions used to build an image. The COPY instruction copies a file to the appropriate directory in the image.
In a Dockerfile, the FROM instruction is the foundation for building your image: it specifies the base image your new image will be built upon, telling Docker which existing image to use as the starting point.
FROM [base image]
COPY [file in the dockertmp directory] /tmp/
RUN apt update -y && \
    apt autoremove && apt autoclean
The typical instructions used to create a Docker image are:
FROM [base image]
# create a '/project' folder in the base image which contains all your code/files
COPY [file or directory you want to copy] /project
# set '/project' as the working directory in the container, which is isolated from the local machine
WORKDIR /project
# install the required libraries and dependencies
RUN pip install -r requirements.txt
EXPOSE $PORT
# CMD sets the command to run when the container starts
CMD
Step 2: Start Docker and use the Dockerfile to build the Docker image.
First check whether Docker is running with the command
docker info
If Docker is not running, start it using
sudo service docker start
Build the Docker image and name it [image name], e.g., "docker build -t semeregt_dl ." where -t stands for tag, i.e., labeling the image with a name.
docker build -t [image name] .
Step 3: Create a container from the Docker image and map a port on your local machine, e.g., 8080, to a port inside the container, e.g., 80.
docker run -p 8080:80 -itd --name [container name] [image]
Now you can open 'http://[host ip address]:8080' in your browser to see what is going on.
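As a concrete illustration of the template above, a minimal Dockerfile for a PyTorch project could look like the sketch below; the base image, file names, port, and start command are illustrative assumptions, not values from this post.

```dockerfile
# Hypothetical example; the base image, files, port, and command are assumptions.
FROM pytorch/pytorch:latest

# Copy the project code into a /project folder inside the image
COPY . /project
WORKDIR /project

# Install the Python dependencies
RUN pip install -r requirements.txt

# Document the port the application listens on
EXPOSE 8080

# Command executed when the container starts
CMD ["python", "train.py"]
```

You would then build it as described in Step 2, e.g., docker build -t my-dl-image .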
A container has its own operating system layer, packages, and all the scripts, and is largely isolated from the local machine. The isolated nature of a container is both its strength and its challenge. It's like having a fully equipped computer within your existing one, with its own set of dependencies and libraries, all running in sync to perform the tasks you desire. This isolation ensures that whatever you do inside the container doesn't interfere with your main system, providing an extra layer of safety and consistency in your development process. Especially in the realm of deep learning, where complex libraries like PyTorch are used, Docker lets you experiment freely without worrying about environment inconsistencies or conflicts. However, it also means you need to manage data transfer carefully between your main system and the container, as they don't automatically share resources unless explicitly told to do so.
Dockerfile: consists of a set of instructions on how to build the Docker image. The OS, the Python version, and the extra packages are stated in this 'Dockerfile' file.
Docker image: built from the Dockerfile; this step produces the layered image that containers are started from.
Docker container: a Docker image when it is run. In other words, it is an instance of a Docker image.
Among the benefits of Docker:
Portability
Ease of deployment
Build the Docker image
docker build -t aladdin-image .
Run the container
docker run -d \
  --gpus all \
  -v "${PWD}:/code" \
  -p 8080:8080 \
  --name "aladdin-container" \
  --env AUTHENTICATE_VIA_JUPYTER="mytoken" \
  aladdin-image \
  tail -f /dev/null
Here `--gpus all` exposes the local GPUs, `-v "${PWD}:/code"` mounts a volume from the local directory into the container's /code directory (the local path should be absolute, starting from /home/ or /root/), `--env AUTHENTICATE_VIA_JUPYTER="mytoken"` is only needed for Jupyter and can be left out, and the trailing `tail -f /dev/null` keeps the container running. Note that a `# comment` after a line-continuation `\` breaks the command, so the flags are explained here rather than inline.
Interact with the Docker container
docker exec -it aladdin-container /bin/bash
The volume mount keeps the container directory in sync with the local machine. Run the following from a terminal on the local machine (here, reattaching a tmux session named 'docker'):
tmux attach -t docker
docker compose up # run the container using Docker Compose
docker compose down # stop it
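The `docker compose` commands read a compose file (docker-compose.yml) from the current directory. A minimal sketch mirroring the flags of the `docker run` example above might look as follows; the service name and values are illustrative assumptions.

```yaml
# Hypothetical docker-compose.yml; names and values mirror the docker run example above.
services:
  aladdin:
    image: aladdin-image
    container_name: aladdin-container
    ports:
      - "8080:8080"              # host port : container port
    volumes:
      - .:/code                  # mount the current directory into /code
    environment:
      AUTHENTICATE_VIA_JUPYTER: "mytoken"   # only needed for Jupyter
    command: tail -f /dev/null   # keep the container running
```

With this file in place, `docker compose up -d` starts the service in the background and `docker compose down` stops it.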
docker container prune # remove all the stopped containers
Rename an image:
docker image tag [image name]:latest [new name]:version # run 'docker images' to see the name and whether the tag is latest
docker tag [image name]:latest [new name]:version # equivalent form; the version is a tag and can be 1.0, 2.0, 1.1, or something else
Upload the image to Docker Hub with:
docker push [image name]:version # use the tag/version set in the command above
To remove the local instance of an image, run:
docker container prune
docker rmi [image name]:version # version is the tag; rmi stands for remove image
To make the local GPU accessible to containers, install the NVIDIA container runtime by following: https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html
After the installation, reload the systemd configuration and restart Docker:
sudo systemctl daemon-reload
sudo systemctl restart docker
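Following that install guide, the toolkit registers an NVIDIA runtime in Docker's daemon configuration (typically /etc/docker/daemon.json). Based on the toolkit's documented defaults, the resulting entry looks roughly like this sketch:

```json
{
  "runtimes": {
    "nvidia": {
      "path": "nvidia-container-runtime",
      "runtimeArgs": []
    }
  }
}
```

This configuration change is why the daemon reload and Docker restart are needed afterwards.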
Semere Gerezgiher Tesfay