Download File From URL in Linux

Last updated on Jan 2, 2020 in Linux

Imagine that you have a file at a URL and you need to download it from the Linux command line. You can use wget, a simple HTTP client that is available in most Linux distributions.

Wget is essentially a command line web browser without a graphical presentation - it just downloads the content, whether HTML, PDF, or JPG, and saves it to a file.

Download File With 'wget' Command Line HTTP Client

Let’s assume that your resource is at some URL. In your terminal, type the following, replacing <URL> with the address of your resource:

$ wget <URL>

You should get an output similar to this:

--2020-01-02 20:47:34--
Resolving ... 2606:4700:30::681f:5337, 2606:4700:30::681f:5237, ...
Connecting to ... |2606:4700:30::681f:5337|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: unspecified [text/html]
Saving to: “linux-check-free-disk-space”

    [ <=>                                 ] 17,086      --.-K/s   in 0s

2020-01-02 20:47:34 (57.6 MB/s) - “linux-check-free-disk-space” saved [17086]

This sends the request, receives a 200 OK response, and then downloads the HTML content, saving it as the file linux-check-free-disk-space.
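By default, wget derives the output filename from the last segment of the URL. If you want a different name, the standard -O option sets it explicitly. A minimal sketch - the URL here is a hypothetical placeholder, not the article's original resource:

```shell
# Save the response under a name of your choosing instead of the
# URL-derived default. -O writes the downloaded content to the given file.
wget -O disk-space.html https://example.com/linux-check-free-disk-space
```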

Verify Downloaded File

You can easily inspect the file:

$ cat linux-check-free-disk-space
<!DOCTYPE html>
<html lang="en-US">
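Beyond cat, a couple of standard utilities give a quicker sanity check of what was downloaded - a sketch, assuming the default filename saved above:

```shell
# Report the detected content type of the downloaded file.
file linux-check-free-disk-space

# Report its size in bytes, which should match the length wget printed.
wc -c linux-check-free-disk-space
```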

How To Make Just a Request - Without Downloading?

In this case, you could check the article: Request URL With wget Without Saving File
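One common approach - not necessarily the one that article uses - is wget's --spider mode, which sends the request and reports the server's response without saving the body. A sketch with a hypothetical placeholder URL:

```shell
# --spider checks that the resource exists (and prints the response
# headers with the server status) but does not save any file.
wget --spider https://example.com/linux-check-free-disk-space
```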