Commit 55593ef7 authored by emanueledalsasso

example on two Airbus Sample images

%% Cell type:markdown id: tags:
**[Download the notebook](https://gitlab.telecom-paris.fr/ring/MERLIN/-/raw/master/MERLIN-TSX-HS-spotlight-test.ipynb?inline=false) and then import it under Google Colab**
<a href="https://colab.research.google.com/" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
%% Cell type:markdown id: tags:
# As if by magic: self-supervised training of despeckling networks with MERLIN
## Emanuele Dalsasso, Loïc Denis, Florence Tupin
Please note that the training set is composed only of **TerraSAR-X** SAR images **acquired in High Resolution SPOTLIGHT (HS) mode**; this testing code is therefore specific to such data.
%% Cell type:markdown id: tags:
## 0.1. Enable GPU and save copy on Drive to enable editing
Runtime -> Change runtime type -> Hardware accelerator: GPU
File -> Save a copy in Drive
%% Cell type:markdown id: tags:
## 0.2. Fetch image data and upload it on Colab
Sample images can be found at https://www.intelligence-airbusds.com/imagery/sample-imagery/
To get High Resolution SpotLight data, select `Radar Imagery and Data --> Radar Imagery --> TerraSAR-X High-Resolution SpotLight` and then click on the `Search` button. Download a Single-Look Complex (SLC) image, such as *Australia, Uluru - InSAR 1*. You have to fill in a form with your details and will receive the download link in your inbox. Then, upload your image to Google Colab, or alternatively read it locally using the ```cos2mat``` function and upload the image crop of interest to `/content/MERLIN-TSX-spotlight-test/test_data` after having downloaded the network weights as shown below.
The corresponding denoised image can be found in the supporting document on GitLab. A video tutorial showing how the results were obtained is available at: https://youtu.be/RCXm07jSUEA
A result on a crop of size $1024\times 1024$ is available in the *Additional_results* folder of the Git repository.
%% Cell type:markdown id: tags:
## 1. Download network weights and code
%% Cell type:code id: tags:
``` python
!wget https://gitlab.telecom-paris.fr/ring/MERLIN/-/raw/master/load_cosar.py
!wget https://gitlab.telecom-paris.fr/ring/MERLIN/-/raw/master/network_weights/MERLIN-TSX-spotlight-test.zip
!unzip /content/MERLIN-TSX-spotlight-test.zip
```
%% Cell type:code id: tags:
``` python
import numpy as np
from load_cosar import cos2mat
image_data = cos2mat('')  # fill in the path of the uploaded .cos file
np.save('/content/MERLIN-TSX-spotlight-test/test_data/test_image_data.npy', image_data)
```
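%% Cell type:markdown id: tags:
If the full scene is large, you can keep only the crop of interest before saving it. A minimal sketch, assuming `image_data` comes from the cell above (the window coordinates and file name are placeholders):
%% Cell type:code id: tags:
``` python
# Keep a 1024x1024 window of interest (placeholder coordinates to adapt)
crop = image_data[:1024, :1024, :]
np.save('/content/MERLIN-TSX-spotlight-test/test_data/test_image_crop.npy', crop)
```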
%% Cell type:markdown id: tags:
## 2. Install a compatible version of TensorFlow
%% Cell type:code id: tags:
``` python
!pip uninstall -y tensorflow
```
%% Cell type:code id: tags:
``` python
!pip install tensorflow-gpu==1.13.1
```
%% Cell type:markdown id: tags:
## 3. Test on real data
**TerraSAR-X High Resolution SpotLight** images in **Single-Look Complex (SLC)** format are to be stored in the folder _/content/MERLIN-TSX-spotlight-test/test_data/_
To test on custom data, upload your SLC images as numpy arrays of shape [ydim, xdim, 2] (where [:,:,0] contains the **real part** and [:,:,1] contains the **imaginary part**) to the folder _/content/MERLIN-TSX-spotlight-test/test_data/_, as sketched below.
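A minimal numpy sketch of this conversion (the random array and file name are placeholders standing in for real data):

``` python
import numpy as np

# Placeholder complex-valued SLC data standing in for a real acquisition
slc = np.random.randn(512, 512) + 1j * np.random.randn(512, 512)

# Stack real and imaginary parts along the last axis: shape [ydim, xdim, 2]
image_data = np.stack([slc.real, slc.imag], axis=-1)
assert image_data.shape == (512, 512, 2)

np.save('/content/MERLIN-TSX-spotlight-test/test_data/my_slc_crop.npy', image_data)
```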
Results are stored in _/content/test_. For each image, the following files are produced as output:
- the real part $a$
- the imaginary part $b$
- the noisy image in amplitude format: $A=\sqrt{a^2+b^2}$, where $a$ and $b$ are the real and imaginary parts of the single-look complex data, respectively
- the square root $\sqrt{\hat{R}_a}$ of the reflectivity estimated from the real part: $f_{CNN}(a)=\hat{R}_a$
- the square root $\sqrt{\hat{R}_b}$ of the reflectivity estimated from the imaginary part: $f_{CNN}(b)=\hat{R}_b$
- the denoised image in amplitude format, obtained by averaging the two intermediate estimates (see the toy sketch after this list): $\sqrt{\hat{R}}=\sqrt{\frac{\hat{R}_a+\hat{R}_b}{2}}$
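As a toy numpy illustration of this averaging step (the constant arrays are placeholders standing in for the network outputs $\hat{R}_a$ and $\hat{R}_b$):

``` python
import numpy as np

# Placeholder reflectivity estimates standing in for f_CNN(a) and f_CNN(b)
R_a = np.full((4, 4), 2.0)
R_b = np.full((4, 4), 4.0)

# Average in intensity, then take the square root to get the amplitude
denoised_amplitude = np.sqrt((R_a + R_b) / 2.0)
print(denoised_amplitude[0, 0])  # sqrt((2 + 4) / 2) = sqrt(3) ≈ 1.732
```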
For each image, the corresponding _png_ file is generated as follows. A threshold $t$ is estimated (or pre-estimated) on the noisy image: $t = \mu_A+3\sigma_A$, where $\mu_A$ is the mean of $A$ and $\sigma_A$ its standard deviation. This threshold is applied to each image to compress the long-tailed distribution of SAR amplitudes. The thresholded dynamic range is then rescaled to $[0, 255]$ for visualization purposes, as sketched below. To produce the _png_ files of the real and imaginary parts, $a\sqrt{2}$ and $b\sqrt{2}$ are plotted.
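A minimal sketch of this visualization step (the helper function is illustrative, not the repository's actual code):

``` python
import numpy as np
from PIL import Image

def save_sar_png(A, path):
    """Clip the amplitude at mu + 3*sigma and rescale to [0, 255]."""
    t = A.mean() + 3 * A.std()      # threshold taming the long tail
    A_clipped = np.clip(A, 0, t)
    img = (A_clipped / t * 255).astype(np.uint8)
    Image.fromarray(img).save(path)
```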
Each time a test is run, clean the _/content/test_ directory first, otherwise the previous results will be overwritten.
%% Cell type:code id: tags:
``` python
!python /content/MERLIN-TSX-spotlight-test/main.py
```
%% Cell type:markdown id: tags:
When the image dimension exceeds 256, the U-Net is scanned over the image with a default stride of 64 pixels. To change it to a custom value, run the cell below (here it is set to 32, yielding higher quality at the cost of a longer runtime).
%% Cell type:code id: tags:
``` python
!python /content/MERLIN-TSX-spotlight-test/main.py --stride_size=32
```
......
%% Cell type:markdown id: tags:
**[Download the notebook](https://gitlab.telecom-paris.fr/RING/MERLIN/-/raw/master/MERLIN-TSX-stripmap-test.ipynb?inline=false) and then import it under Google Colab**
<a href="https://colab.research.google.com/" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
%% Cell type:markdown id: tags:
# As if by magic: self-supervised training of despeckling networks with MERLIN
## Emanuele Dalsasso, Loïc Denis, Florence Tupin
Please note that the training set is composed only of **TerraSAR-X** SAR images **acquired in STRIPMAP mode**; this testing code is therefore specific to such data.
%% Cell type:markdown id: tags:
## 0.1. Enable GPU and save copy on Drive to enable editing
Runtime -> Change runtime type -> Hardware accelerator: GPU
File -> Save a copy in Drive
%% Cell type:markdown id: tags:
## 0.2. Fetch image data and upload it on Colab
Sample images can be found at https://www.intelligence-airbusds.com/imagery/sample-imagery/
To get Stripmap data, select `Radar Imagery and Data --> Radar Imagery --> TerraSAR-X Stripmap` and then click on the `Search` button. Download a Single-Look Complex (SLC) image, such as *Australia, Uluru - InSAR 1*. You have to fill in a form with your details and will receive the download link in your inbox. Then, upload your image to Google Colab, or alternatively read it locally using the ```cos2mat``` function and upload the image crop of interest to `/content/MERLIN-TSX-stripmap-test/test_data` after having downloaded the network weights as shown below.
The corresponding denoised image can be found in the supporting document on GitLab. A video tutorial showing how the results were obtained is available at: https://youtu.be/RCXm07jSUEA
A result on a crop of size $1024\times 1024$ is available in the *Additional_results* folder of the Git repository.
%% Cell type:markdown id: tags:
## 1. Download network weights and code
%% Cell type:code id: tags:
``` python
!wget https://gitlab.telecom-paris.fr/ring/MERLIN/-/raw/master/load_cosar.py
!wget https://gitlab.telecom-paris.fr/ring/MERLIN/-/raw/master/network_weights/MERLIN-TSX-stripmap-test.zip
!unzip /content/MERLIN-TSX-stripmap-test.zip
```
%% Cell type:code id: tags:
``` python
import numpy as np
from load_cosar import cos2mat
image_data = cos2mat('')  # fill in the path of the uploaded .cos file
np.save('/content/MERLIN-TSX-stripmap-test/test_data/test_image_data.npy', image_data)
```
%% Cell type:markdown id: tags:
## 2. Install a compatible version of TensorFlow
%% Cell type:code id: tags:
``` python
!pip uninstall -y tensorflow
```
%% Cell type:code id: tags:
``` python
!pip install tensorflow-gpu==1.13.1
```
%% Cell type:markdown id: tags:
## 3. Test on real data
**TerraSAR-X Stripmap** images in **Single-Look Complex (SLC)** format are to be stored in the folder _/content/MERLIN-TSX-stripmap-test/test_data/_
To test on custom data, upload your SLC images as numpy arrays of shape [ydim, xdim, 2] (where [:,:,0] contains the **real part** and [:,:,1] contains the **imaginary part**) to the folder _/content/MERLIN-TSX-stripmap-test/test_data/_; a quick sanity check is sketched below.
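The same array layout as in the SpotLight notebook applies; a quick sanity check before running the test (assuming the image was saved by the conversion cell above):

``` python
import numpy as np

arr = np.load('/content/MERLIN-TSX-stripmap-test/test_data/test_image_data.npy')
# Expect shape [ydim, xdim, 2]: real part in channel 0, imaginary part in channel 1
assert arr.ndim == 3 and arr.shape[-1] == 2
print('shape:', arr.shape)
```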
Results are stored in _/content/test_. For each image, the following files are produced as output:
- the real part $a$
- the imaginary part $b$
- the noisy image in amplitude format: $A=\sqrt{a^2+b^2}$, where $a$ and $b$ are the real and imaginary parts of the single-look complex data, respectively
- the square root $\sqrt{\hat{R}_a}$ of the reflectivity estimated from the real part: $f_{CNN}(a)=\hat{R}_a$
- the square root $\sqrt{\hat{R}_b}$ of the reflectivity estimated from the imaginary part: $f_{CNN}(b)=\hat{R}_b$
- the denoised image in amplitude format, obtained by averaging the two intermediate estimates: $\sqrt{\hat{R}}=\sqrt{\frac{\hat{R}_a+\hat{R}_b}{2}}$
For each image, the corresponding _png_ file is generated as follows. A threshold $t$ is estimated (or pre-estimated) on the noisy image: $t = \mu_A+3\sigma_A$, where $\mu_A$ is the mean of $A$ and $\sigma_A$ its standard deviation. This threshold is applied to each image to compress the long-tailed distribution of SAR amplitudes. The thresholded dynamic range is then rescaled to $[0, 255]$ for visualization purposes. To produce the _png_ files of the real and imaginary parts, $a\sqrt{2}$ and $b\sqrt{2}$ are plotted.
Each time a test is run, clean the _/content/test_ directory first, otherwise the previous results will be overwritten.
%% Cell type:code id: tags:
``` python
!python /content/MERLIN-TSX-stripmap-test/main.py
```
%% Cell type:markdown id: tags:
When the image dimension exceeds 256, the U-Net is scanned over the image with a default stride of 64 pixels. To change it to a custom value, run the cell below (here it is set to 32, yielding higher quality at the cost of a longer runtime).
%% Cell type:code id: tags:
``` python
!python /content/MERLIN-TSX-stripmap-test/main.py --stride_size=32
```
......
**A video tutorial with instructions to download sample data from the [Airbus webpage](https://www.intelligence-airbusds.com/imagery/sample-imagery/) and put it in the proper input format required by the framework is available at https://youtu.be/RCXm07jSUEA. Results on two images are provided in the *Additional_results* folder.**
# As if by magic: self-supervised training of deep despeckling networks with MERLIN
## Emanuele Dalsasso, Loïc Denis, Florence Tupin
......