If you have a file at a URL and want to download it from the Linux terminal, you can use wget, an easy-to-use command-line HTTP client.
Wget is essentially a command-line web browser without the graphical presentation: it just downloads the content, whether HTML, PDF, or JPG, and saves it to a file.
Let’s assume that your resource is
https://fullstack-tutorials.com/linux/linux-check-free-disk-space. In your terminal, type:
$ wget https://fullstack-tutorials.com/linux/linux-check-free-disk-space
You should get an output similar to this:
--2020-01-02 20:47:34--  https://fullstack-tutorials.com/linux/linux-check-free-disk-space
Resolving fullstack-tutorials.com... 2606:4700:30::681f:5337, 2606:4700:30::681f:5237, 220.127.116.11, ...
Connecting to fullstack-tutorials.com|2606:4700:30::681f:5337|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: unspecified [text/html]
Saving to: “linux-check-free-disk-space”

    [ <=>                                ] 17,086      --.-K/s   in 0s

2020-01-02 20:47:34 (57.6 MB/s) - “linux-check-free-disk-space” saved
This makes the request, receives a 200 OK response, and then downloads and saves the HTML file linux-check-free-disk-space. Note that the filename is taken from the last part of the URL path, so here it has no extension even though we know the content is HTML.
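If you have already downloaded the file, the simplest fix is a plain mv rename. The sketch below creates an empty stand-in file with touch so it runs without a network connection; in practice the file would be the one wget just saved.

```shell
# Stand-in for the extensionless file wget saved (no network needed here):
touch linux-check-free-disk-space

# Rename it so editors and browsers recognize it as HTML:
mv linux-check-free-disk-space linux-check-free-disk-space.html

ls linux-check-free-disk-space.html
```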
We could rename it afterwards, but we can also tell wget to write its output to a named file, as follows.
$ wget -O ./page.html https://fullstack-tutorials.com/linux/linux-check-free-disk-space
-O sets the output document, and the next argument is the target filename. After this command we should have the file
page.html containing the HTML content from the URL.
You can easily check the file:
$ cat page.html
<!DOCTYPE html>
<html lang="en-US">
...
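Beyond eyeballing the file with cat, a quick sanity check can confirm we received HTML rather than an error page. The snippet below writes a minimal stand-in page.html with printf so it runs without a network connection; with a real download you would skip that first step.

```shell
# Stand-in for the page wget -O would have saved (skip this with a real download):
printf '<!DOCTYPE html>\n<html lang="en-US">\n<head><title>Demo</title></head>\n</html>\n' > page.html

# The first line should be the HTML doctype:
head -n 1 page.html

# Count lines containing an opening <html tag; 1 means we got a real page:
grep -c '<html' page.html
```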