CVE-2022-26612: In Apache Hadoop, unTar uses the unTarUsingJava function on Windows and the built-in tar utility on other platforms. A TAR entry may create a symlink under the expected extraction directory which points to an external directory, and a subsequent TAR entry can then be extracted through that symlink into the external directory.
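To make the archive shape concrete, here is a minimal sketch of a TAR laid out as described, built with Apache Commons Compress. The file name demo.tar, the entry names, and the SymlinkTarDemo class are all made up for the example.

import java.io.FileOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import org.apache.commons.compress.archivers.tar.TarArchiveEntry;
import org.apache.commons.compress.archivers.tar.TarArchiveOutputStream;
import org.apache.commons.compress.archivers.tar.TarConstants;

// Sketch of the archive shape described above: a symlink entry pointing outside
// the extraction directory, followed by a file entry written through that name.
public final class SymlinkTarDemo {
    public static void main(String[] args) throws IOException {
        try (TarArchiveOutputStream tar =
                 new TarArchiveOutputStream(new FileOutputStream("demo.tar"))) {
            // Entry 1: "escape" is a symlink to a directory outside the extraction dir.
            TarArchiveEntry link = new TarArchiveEntry("escape", TarConstants.LF_SYMLINK);
            link.setLinkName("../../outside");
            tar.putArchiveEntry(link);
            tar.closeArchiveEntry();

            // Entry 2: a regular file whose path goes through the symlink name.
            byte[] payload = "example".getBytes(StandardCharsets.UTF_8);
            TarArchiveEntry file = new TarArchiveEntry("escape/file.txt");
            file.setSize(payload.length);
            tar.putArchiveEntry(file);
            tar.write(payload);
            tar.closeArchiveEntry();
        }
    }
}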
unpackEntry now validates the target directory path when it creates a TAR entry and does not allow symlink targets (a sketch of this check is shown below). This is a change in the unTar function, not in the TAR entries themselves. As a result, unTar will fail if it detects a TAR entry with a symlink target.
Using unTar on Windows
On Windows, this can be addressed by running unTar through the unTarUsingJava function. Pass the --java_options flag to specify the Java options.
unTar Java options on Windows
-Djava.io.tmpdir=C:\Some\Temp\
-Djavac.ext.class.path=C:\Some\Temp\lib\ext\src\java\lib\ext\src
-Djavac.src.class.path=C:\Some\Temp\lib\src\java
This sets the temporary directory to C:\Some\Temp, the JDK source path to C:\Some\Temp\lib\ext\src\java\lib\ext\src, and the Hadoop source path to C:\Some\Temp\lib\src\java, so that the unTar function can access the JDK and Hadoop sources.
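For reference, the validation mentioned at the start of this section can be sketched roughly as follows. This is only an illustration, not the actual Hadoop patch: the TarEntryChecks class and validateEntry method are invented names, and TarArchiveEntry comes from Apache Commons Compress. The idea is to reject link entries and to require that each entry's canonical path stays under the extraction directory.

import java.io.File;
import java.io.IOException;
import org.apache.commons.compress.archivers.tar.TarArchiveEntry;

// Illustration only; not the actual Hadoop patch.
public final class TarEntryChecks {

    // Reject link entries outright and make sure the resolved output path
    // stays inside the intended extraction directory.
    static void validateEntry(TarArchiveEntry entry, File outputDir) throws IOException {
        if (entry.isSymbolicLink() || entry.isLink()) {
            throw new IOException("refusing to unpack link entry: " + entry.getName());
        }
        File target = new File(outputDir, entry.getName());
        String targetPath = target.getCanonicalPath();
        String dirPath = outputDir.getCanonicalPath() + File.separator;
        if (!targetPath.startsWith(dirPath)) {
            throw new IOException("entry escapes the extraction directory: " + entry.getName());
        }
    }
}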
Install the prerequisites
To install the prerequisites on Fedora, open a terminal and type the following:
sudo dnf install tar bzip2 unixODBC-devel java-1.8.0-openjdk-devel
On Ubuntu/Debian, open a terminal and type the following:
sudo apt-get install tar bzip2 libbz2-dev unixodbc-dev ia32-libs libstdc++5 libssl-dev make libncurses5 libglu1 libx11-dev xorg g++ pkg-config
Installing unTar on Windows
To install unTar on Windows, use the following steps:
Step 1: Download and extract the unTar binary from https://github.com/tar-project/tar
Step 2: Install Java, the JDK binaries, and Hadoop on your computer
Step 3: Create a batch file that runs unTar without generating any warnings (a sketch of the launcher such a batch file could invoke is shown below)
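A minimal sketch of such a launcher, assuming hadoop-common and its dependencies are on the classpath; the UnTarRunner class name and the two-argument layout are illustrative, not part of Hadoop.

import java.io.File;
import java.io.IOException;
import org.apache.hadoop.fs.FileUtil;

// Illustrative launcher a batch file could invoke, for example:
//   java %JAVA_OPTS% -cp <hadoop-common and its dependencies> UnTarRunner archive.tar C:\Some\Out
public final class UnTarRunner {
    public static void main(String[] args) throws IOException {
        File archive = new File(args[0]);   // the TAR file to unpack
        File untarDir = new File(args[1]);  // the expected extraction directory
        // On Windows this goes through unTarUsingJava; with the fix, an entry with a
        // symlink target makes unTar throw an IOException instead of extracting it.
        FileUtil.unTar(archive, untarDir);
    }
}

The Java options listed earlier would go into %JAVA_OPTS% (or directly onto the java command line) inside the batch file.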
Running unTar in a MapReduce Job
This section explains how to run unTar as part of a MapReduce job on the Hadoop cluster.
The idea is to submit a MapReduce job and pass the list of files that need to be un-tarred as command-line arguments; a sketch in which each mapper unpacks one archive follows.
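A minimal sketch of such a job, assuming the archives live on HDFS and the Hadoop client libraries are available. Everything here is illustrative: the UnTarJob and UnTarMapper class names, the temporary list file under /tmp, and the choice to give each archive its own mapper via NLineInputFormat.

import java.io.File;
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.FileUtil;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.NLineInputFormat;
import org.apache.hadoop.mapreduce.lib.output.NullOutputFormat;

// Illustrative map-only job: the driver takes TAR paths as command-line arguments,
// writes them one per line to a list file, and each mapper un-tars one archive.
public class UnTarJob {

    public static class UnTarMapper extends Mapper<LongWritable, Text, NullWritable, NullWritable> {
        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            Configuration conf = context.getConfiguration();
            Path tarOnHdfs = new Path(value.toString().trim());
            // Work in a per-task local directory under java.io.tmpdir.
            File localDir = new File(System.getProperty("java.io.tmpdir"), "untar-" + key.get());
            localDir.mkdirs();
            File localTar = new File(localDir, tarOnHdfs.getName());
            // Copy the archive out of HDFS, then unpack it locally.
            FileSystem.get(conf).copyToLocalFile(tarOnHdfs, new Path(localTar.toURI()));
            FileUtil.unTar(localTar, localDir);
        }
    }

    public static void main(String[] args) throws Exception {
        // args: the TAR files to un-tar (HDFS paths), passed on the command line.
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        Path listFile = new Path("/tmp/untar-input-" + System.currentTimeMillis() + ".txt");
        try (FSDataOutputStream out = fs.create(listFile)) {
            for (String arg : args) {
                out.writeBytes(arg + "\n");
            }
        }
        Job job = Job.getInstance(conf, "untar");
        job.setJarByClass(UnTarJob.class);
        job.setMapperClass(UnTarMapper.class);
        job.setNumReduceTasks(0);                         // map-only job
        job.setInputFormatClass(NLineInputFormat.class);  // one line (archive) per mapper
        job.setOutputFormatClass(NullOutputFormat.class);
        job.setOutputKeyClass(NullWritable.class);
        job.setOutputValueClass(NullWritable.class);
        FileInputFormat.addInputPath(job, listFile);
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}

The job would then be submitted with something like hadoop jar untar-job.jar UnTarJob /data/a.tar /data/b.tar, with the archive paths supplied as the command-line arguments.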
Timeline
Published on: 04/07/2022 19:15:00 UTC
Last modified on: 05/19/2022 20:15:00 UTC