VPS setup and web deployment with Ubuntu, Django, Nginx and PostgreSQL (English subtitles)

Greetings to all, my name is Frank Mascarell and this is a video that I originally made only for my own use, so that I could consult it whenever I had to reconfigure a VPS, since many steps are involved; I decided to share it in case it comes in handy for someone. First I want to apologize for the low quality of the sound: it has some echo, and although I have edited it, nothing more can be done, since I have a fairly bad microphone that I bought just for testing; it is not my intention to dedicate myself to making videos. But it will serve its purpose. Having said all that, let's get to the subject.

I have quickly prepared a graphic to show the general idea of how a VPS can be configured, based on the services of Digital Ocean, which are quite good and economical. When we contract a VPS, we can divide it into individual, autonomous spaces: for example, one space configured to serve a dynamic web page, another space with another web page, and another as a store of static data. DO calls these spaces Droplets, by analogy with the many "drops" in an ocean. In this example image there is a VPS with two droplets. The droplet London has Ubuntu 16.04 installed as its OS, and Pip, postgres, virtualenv and PyFilter as global packages. This droplet contains two virtual environments, each with local packages of different versions and a Django web application; this is useful for testing new versions in an isolated, independent virtual environment while we keep the old but still functional versions. The droplet Amsterdam contains exactly the same, but with only one virtual environment. With this approach in mind, we can see the great scalability we can achieve. For example, we can have the same web page on different droplets and then use a load balancer to direct traffic as it suits us: if there are too many requests on the London server, we can redirect some of them to the Amsterdam server to share the load between the two.

To prepare all this, I wrote a script of what I will do in this video, and I will explain every step while we type commands in the shell. If you want, pause the video and take a look. I will leave the necessary links below; most of them are from the Digital Ocean documentation.

PuTTY is a small application for connecting to the server through SSH authentication, which consists of a key pair, one public and one private, with encrypted data transfer; SSH is the safest system at the moment. Download the latest version of PuTTY, the .msi package, which will also install PuTTYgen and Pageant. After installing PuTTY on our Windows machine, we create a key pair with PuTTYgen and save both keys. To do this, leave the default key type (RSA 2048), press Generate, and move the cursor over the blank space until the key is generated. Save the public and private keys; for the private key it asks whether we want to save it without a passphrase, and I recommend not setting one, because some automatic DO services do not allow entering the passphrase for proper authentication. Another clarification: the droplet will have Ubuntu installed, which in turn integrates OpenSSH, and OpenSSH uses a public key format different from PuTTY's, so to upload the key to the DO account you must copy the public key from the PuTTYgen window, not from the .txt file saved in PuTTY's format. If you have closed PuTTYgen, you can reload the private key to copy the public one.
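To illustrate the difference, this is roughly what the two public key formats look like; the key material and the comment are placeholders, not real keys. The one-line OpenSSH format is the one DO expects:

    # OpenSSH format (copy this one from the PuTTYgen window):
    ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAAB... rsa-key-20180101

    # SSH2/PuTTY format (the saved .txt file), which DO will not accept as-is:
    ---- BEGIN SSH2 PUBLIC KEY ----
    Comment: "rsa-key-20180101"
    AAAAB3NzaC1yc2EAAAADAQABAAAB...
    ---- END SSH2 PUBLIC KEY ----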
Now it is time to register and create our droplet. Two-factor authentication is recommended to add more security to our account, although for these tests I will deactivate it for the moment, so I do not have to wait for the mobile message every time I want to log in. We add our SSH key by copying and pasting; remember that you have to copy the public key from the PuTTYgen window and not from the file with the .txt extension. In the panel for creating our droplet, I choose as OS the latest LTS version of Ubuntu, which at this moment is 18.04. The LTS (Long Term Support) versions are released every two years and offer support for five years, and they are the most recommended for production. In the image it seems the selection does not change, but internally it does; I will verify it later. Now I choose the smallest size for my droplet, because it will only be for tests, and then the region of the data center, which will be London, the closest to Spain and theoretically the one with the lowest latency, that is, broadly speaking, the time the server takes to give us an answer after we send it a request, such as a web request. We include the SSH key that we previously added to our account and give the new host a name. After creating the droplet, let's look at its properties: on the right we have the IP address, which is what we will use for most operations in the server configuration, and as you can see it has been created with the correct Ubuntu version.

We are going to connect to the droplet with PuTTY. It is quite simple: first we put the IP address of our droplet here, then we go to the SSH - Auth section and select the same private key whose public half we added to our DO account. We can also configure the appearance of the terminal, changing the font and the colors, as I did. Once everything is configured to our liking, we enter a name for the session and save it, so that later we can connect quickly with a double click on the session name. Once we have the terminal open, we log in as the root user to access our server with full privileges. Working day to day as root is not recommended, since a mistyped command could delete a file or do something worse. The usual practice is to create another user with the minimal privileges Ubuntu grants by default, and when, for example, we have to install a package, which in principle only root can do, we use the sudo command, which we will see now. As you can see, when logging in as root it does not ask for any password, since we are connected via SSH from PuTTY, and the first line of the session says exactly that: the server is authenticating us with the public key stored on it. The next step is to create another user with minimal privileges; it will ask us for a password and some optional information. Then we tell Ubuntu that the user frank can execute commands with root privileges by prefixing them with the sudo command. We can check all the packages installed on the system with apt list, update the list of packages available to Ubuntu with sudo apt update, and upgrade all installed packages with sudo apt upgrade.
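As a quick reference, these are roughly the commands for this step; the username frank matches the one used in the video:

    # Create the unprivileged user (asks for a password and optional info)
    adduser frank

    # Allow frank to run commands with root privileges via sudo
    usermod -aG sudo frank

    # List installed packages, refresh the package index, upgrade everything
    apt list --installed
    sudo apt update
    sudo apt upgrade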
An important clarification: when we created the droplet with the embedded SSH key, we told the server not to accept any password authentication, only SSH. After creating the user frank, you might think of connecting with PuTTY and logging in as frank with the password supplied earlier. This will not work; the server will deny it, and logically so, since we do not want to offer attackers an authentication system that is so easy to break. We can activate password authentication from the DO control panel, but it is not recommended. So the only way to enter as the user frank is to first log in as root and then switch to frank with the su - frank command; notice that it does not ask for the password either, since we are doing it from the server's administrator account, which does not need the other users' passwords. However, when we are logged in as any user other than root, the system may lock the session automatically after a period of inactivity, for safety; if that happens, it will ask for the password of the user in question. At other times, also for safety, Ubuntu will ask us for the password, for example when trying to execute certain important commands, such as installing a package.

Now we will configure the basic firewall included in Ubuntu, which is disabled by default. We can check this by typing ufw status; as you can see, access to the firewall is only available to the root user, so we have to prefix sudo. The firewall is inactive. Before activating it, we add a rule to allow SSH connections through OpenSSH, which is integrated in Ubuntu and is in charge of managing this type of connection. Now we can activate the firewall without the danger of locking ourselves out. We check again and see that two rules for OpenSSH were added.

We will install pip3 from the Ubuntu package list, although for some reason it installs version 9, which is not the latest, so we will do a little trick. I am going to stay in the root session, because I am going to install some packages needed for Python with the pip package manager; we install it as shown (I will not run it because I already have it installed). Then we can list all the installed Python packages with freeze; these are the packages that were installed when we created the droplet. Next we install pip-review, which will help us update all the packages. Running it interactively is one way to do it, reviewing the packages one by one. I already updated them, but it still shows me packages pending update, which it does not update because of their dependencies on other packages. We can check that there are no broken packages with pip3 check.
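Condensed, the firewall and pip steps above look like this; these are standard commands, shown here as a quick reference:

    # Firewall: allow SSH first, then enable
    sudo ufw status
    sudo ufw allow OpenSSH
    sudo ufw enable
    sudo ufw status        # now shows the two OpenSSH rules

    # pip: install from the Ubuntu repos, then upgrade pip itself
    sudo apt install python3-pip
    pip3 install --upgrade pip

    # List installed Python packages, update them interactively, check consistency
    pip3 freeze
    pip3 install pip-review
    pip-review --interactive
    pip3 check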
The next step is to install PyFilter as protection against SSH attacks. We will install this package as the user frank, cloning its git repository from the cloud. These first steps I already performed, so I will show them without executing them. We move the PyFilter folder that was created to /usr/local. Inside it there is a folder called Config containing the file config.default.json, of which we make a copy, since we are going to modify it; if we enter the Config folder we see the two files, the copy and the original. Back in the PyFilter folder there is a file run.sh, which is the PyFilter executable; we give it execute permissions and run it. Once we have verified that it works well, we create a service for PyFilter: in the same folder there is an install.sh file that creates the service and enables it to run at system startup. We check the status and see that the service is working correctly. The next step is optional: I install a package as root so that PyFilter also records the geographic location of the attacks, and restart so that PyFilter recognizes it. Inside PyFilter's Config folder is the blacklist of the IPs that PyFilter has blocked indefinitely. PyFilter is configured by default to block IPs with several failed attempts within less than 5 seconds, so most of them will be robots, as has been detected here. PyFilter adds the appropriate rules to the iptables firewall.
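The PyFilter steps, roughly as performed in the video; the repository URL and the service name are written from memory, so check them against the links in the description:

    # As the user frank: clone PyFilter and move it to /usr/local
    git clone https://github.com/Jason2605/PyFilter.git
    sudo mv PyFilter /usr/local

    # Keep the default configuration intact and work on a copy
    cd /usr/local/PyFilter
    sudo cp Config/config.default.json Config/config.json

    # Make the launcher executable and test it
    sudo chmod +x run.sh
    sudo ./run.sh

    # Install it as a service enabled at boot, then verify
    sudo ./install.sh
    systemctl status pyfilter    # the unit name may differ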
Now we will install and configure Postgres as the database engine for Django, which is the recommended one. By default postgres does not allow remote connections, which we will not need for our project anyway, since we will reach the DB through the Nginx server and the django application, so no extra protection is required on the postgres side. We update the package index, check the packages that need updating, update them, and install postgres. We check that we can access its console through the default postgres user. We create a new role with superuser permissions for the user frank. Finally, we create a DB that I will call LibrosWeb, so that it has the same name as the DB of my local test project, and we check that we can connect to it without problems. This is all we will do with the DB for now; later, when the local django project is uploaded to the server, we will perform the DB migrations automatically, with all its tables and records.

Next come the packages needed to manage virtual environments: virtualenv, and virtualenvwrapper, which is an extension of the former that does the same but with new, more practical commands. We only need to install virtualenvwrapper, which automatically pulls in virtualenv and the other necessary packages. We install it from the Ubuntu package manager as well, because if we installed it from pip3 we would have to modify some files for it to work properly, and this way it is not necessary. I already have it installed. Now we can switch to an unprivileged user and create a virtual environment with mkvirtualenv and the name of the environment, which creates it and activates it at the same time. As you can see, the virtual environment has been created with a python executable; if we do not indicate a version, it installs the same one the system has, in this case 3.6.5. It also installs pip so that we can install other packages inside the environment, for example Django, which we will install next. Now the environment "envDjango20" is active, and the prompt tells us so. An important clarification: inside an activated virtual environment we install packages with pip, not pip3. We can see that no package is installed within the environment yet, as expected, since it is independent of the system; with workon we can list the available virtual environments.

Then we install django inside the environment, but first we create a folder for the django project, which I will call LibrosWeb. It is important to have the environment activated when installing packages, otherwise we would install them globally in the system, and that is not what we want; we want a separate virtual environment for each version of Django or Python. If we do not specify a version, pip installs the most recent one, which is currently 2.0.6. Next we add a rule to the firewall to open port 8000, which is the one used by the django development server. We create a django project with startproject inside the LibrosWeb folder; this creates another folder with the name of the project, and within it we have the file manage.py and another LibrosWeb folder with the rest of the files, of which we are going to modify settings.py. I will use the nano text editor; here we only have to add the IP address of our VPS to ALLOWED_HOSTS. Another of Django's important recommendations is to keep the secret key, which Django uses for cryptographic signing in production, in an environment variable. To do this we add a command to the .bash_profile file, so that the variable is loaded every time we log in as the user frank; then we log in again as frank so that .bash_profile runs, and modify settings.py to read SECRET_KEY from the environment variable we just created. Finally, we start the django development server with runserver to test the application, and now we can access the django application from the browser on our client machine.
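Pulled together, the database, environment and project steps from this part look roughly like this; the names envDjango20 and LibrosWeb and the example IP come from the video, everything else is a sketch:

    # Postgres: install it, create a superuser role for frank, create the DB
    sudo apt update && sudo apt upgrade
    sudo apt install postgresql postgresql-contrib
    sudo -u postgres createuser --superuser frank
    sudo -u postgres createdb LibrosWeb
    psql LibrosWeb                         # as frank: check that we can connect

    # Virtual environments: install virtualenvwrapper from the Ubuntu repos
    sudo apt install virtualenvwrapper
    mkvirtualenv envDjango20               # creates and activates the environment
    workon                                 # lists the available environments

    # Django project (environment active, so pip rather than pip3)
    pip install django
    sudo ufw allow 8000
    mkdir ~/LibrosWeb && cd ~/LibrosWeb
    django-admin startproject LibrosWeb    # creates LibrosWeb/ with manage.py inside

    # Load the secret key from the login shell instead of hard-coding it
    echo 'export SECRET_KEY="paste-the-generated-key-here"' >> ~/.bash_profile

and in settings.py, the two edits described above:

    import os
    SECRET_KEY = os.environ['SECRET_KEY']
    ALLOWED_HOSTS = ['15.15.15.15']        # your droplet's IP here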
The next step will be to connect the LibrosWeb DB with the django application. We are going to apply some configurations recommended by django to the frank role: we connect as the postgres user and modify a few properties of the role. Then we activate the virtual environment to install another package that django needs in order to connect to postgres. Now we edit the settings.py file to give it the parameters of the DB it has to connect to. When we created the frank role with the createuser command, since I was logged in as frank it did not ask me for any password, so one was created automatically; let's change it and set the same one we put in settings.py. Now we can run django's migrations to create the correct DB structure for postgres. Then we create an administrative superuser for the django application and start the server to test it. If we add /admin to the end of the IP address in the browser, we enter the administrative interface of the application, where we log in as the superuser we created for django; this gives us full access to the DB, which for now is empty: there are no tables or records, only the skeleton.
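In code, this step is close to what the Django documentation recommends for postgres; the password is a placeholder. Inside psql, connected as the postgres user:

    ALTER ROLE frank SET client_encoding TO 'utf8';
    ALTER ROLE frank SET default_transaction_isolation TO 'read committed';
    ALTER ROLE frank SET timezone TO 'UTC';
    ALTER USER frank WITH PASSWORD 'same-password-as-settings';

Then, with the environment active, we install the postgres driver and point settings.py at the DB:

    pip install psycopg2    # psycopg2-binary avoids compiling, if this one fails

    # settings.py
    DATABASES = {
        'default': {
            'ENGINE': 'django.db.backends.postgresql',
            'NAME': 'LibrosWeb',
            'USER': 'frank',
            'PASSWORD': 'same-password-as-settings',
            'HOST': 'localhost',
            'PORT': '',
        }
    }

and finally the migration, the superuser and the test run:

    python manage.py migrate
    python manage.py createsuperuser
    python manage.py runserver 0.0.0.0:8000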
Well, so far we have basically configured a server to be able to work with django in test mode and safely. The web server integrated in django is very good for testing our website, but for production we will need a more powerful and secure web server, like the well-known Apache or the newer Nginx, which is taking ground from the former. In my case I will use Nginx; installing and configuring it would be the next step. But first I want to make an aside in this configuration: I want to add more security to the administrative website, that is, to the web page the company administrator will use to access the DB. Normally, administrative sites are exposed over the internet: to access them we open the browser on our computer, enter the prepared URL and reach the authentication page, which asks for a username and password. We can also add two-step authentication, where the second step sends a code to our mobile to be entered during login; then, once connected over the https protocol, we have a secure connection through SSL encryption and a certificate. This is quite safe, but it still leaves a door open for hackers or robots to try to break the password by brute force, with which they could access the company's database.

So the underlying idea is the following: the administrator connects to the server directly using PuTTY and SSH, and when the server detects that this administrator has connected, it runs a script that performs the following actions: first it activates the virtual environment, second it launches the django web server with runserver, and finally it runs the google-chrome browser remotely, that is, displayed on the administrator's machine, opening the administrative web page. In this way, we have SSH authentication plus a username and password. To take it a step further, add more security, and make things easy for the administrator, assuming they have no computer experience with servers or applications, it would be ideal to prepare a pendrive they can carry on their keychain, so that they only have to plug the pendrive into their laptop or mobile and the browser opens automatically with the administrative website. It is less likely that you lose your house keys, or have them stolen, than the laptop or the mobile, so it is better that the private key is not stored on either of those two. But if the client still prefers not to carry a pendrive, connecting from the laptop and running the risk of being hacked and having the private key stolen, the process is the same as the one we will follow now; the only difference is that the whole setup lives on the computer instead of on the USB stick.

The first thing we should do is change the SSH configuration by modifying the sshd_config file: we change port 22, which is the default, to any other, and activate the use of X11; we save and restart the sshd service. Now we download Xming and install it on our Windows machine. Remember to give the firewall permissions for the new ssh port. I will create an administrative user, and we will work with it from now on. We create a key pair for this user and save it, and we put the public key in authorized_keys. Then we create and save a new PuTTY session with all its data, and activate X11 forwarding in the corresponding section.
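The server-side changes, as a sketch; 2222 stands in for whatever port you choose:

    # /etc/ssh/sshd_config (edited as root)
    Port 2222
    X11Forwarding yes

    # open the new port and restart the SSH daemon
    sudo ufw allow 2222/tcp
    sudo systemctl restart sshd

    # as the new administrative user: authorize its public key
    mkdir -p ~/.ssh && chmod 700 ~/.ssh
    nano ~/.ssh/authorized_keys    # paste the OpenSSH-format public key here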
Now we can launch chrome to check that everything is fine, and the browser opens on our computer. We continue configuring the new user: we create a virtual environment with the same applications we had for frank, and we also copy the LibrosWeb test project. We start Xming on Windows and test google-chrome remotely. It throws some errors because chrome requires certain GPU features that my server does not have, but it works well. The next step is to automate the process, changing the user's .bashrc so that the virtual environment is activated on connection. Then we create the python script using three subprocess calls: the first, a Popen, is asynchronous and does not wait to finish before the next instruction executes, since the django web server must always keep running. The second opens chrome with the URL we specify, logically the login window; this one is a run call that waits to finish before executing the next instruction, which consists of killing the process that launched the django server, so that it does not stay running after the connection is closed. Finally, we check that when we connect with PuTTY, the browser opens.
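The script itself is only briefly on screen, so this is a reconstruction of the idea rather than the exact file; the paths, the environment name and the URL are assumptions:

    #!/usr/bin/env python3
    """Launch the django dev server, open the admin page in the
    administrator's browser over X11, and clean up on browser exit."""
    import subprocess

    PYTHON = '/home/gela/.virtualenvs/envDjango20/bin/python'  # env's interpreter
    MANAGE = '/home/gela/LibrosWeb/LibrosWeb/manage.py'

    # 1) Popen is asynchronous: the dev server keeps running in the background
    server = subprocess.Popen([PYTHON, MANAGE, 'runserver', '127.0.0.1:8000'])

    # 2) run() blocks until the administrator closes the browser window;
    #    with X11 forwarding, the window appears on the administrator's machine
    subprocess.run(['google-chrome', 'http://127.0.0.1:8000/admin/'])

    # 3) kill the dev server so nothing stays running after disconnecting
    server.kill()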
Now we will implement this automation on a pendrive. For this we will need an autorun.inf file on the pendrive that starts a portable PuTTY. Keep in mind that from Windows 7 onwards, the ability of autorun to start arbitrary programs automatically was removed for security reasons, so we must install a small program that monitors devices as they are connected, such as APO USB Autorun. First we download and install putty-portable on the pendrive; it is important to install it in the root directory of the pendrive or it will not work. We start it, and we create and save a session practically identical to the one we have in the PuTTY installed in Windows; the only variation is that the private key will be on the pendrive itself and not on the hard drive. Then we create the autorun.inf file so that it executes in the shell the command that starts putty, passing it the parameters of the administrator session saved previously. Finally, after installing APO USB Autorun, we place it in the Windows Startup folder together with Xming, so that both start when Windows starts. If all is well and the two programs are running, we can now insert the pendrive and everything starts by itself.
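The autorun.inf is tiny; the executable path below assumes the PortableApps build of PuTTY and a saved session called "admin", both of which you should adapt:

    [autorun]
    ; launched by APO USB Autorun when the pendrive is inserted
    open=PuTTYPortable\PuTTYPortable.exe -load "admin"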
We continue with Nginx, "EN-gin-ex" or "EN-gai-nex" as others call it. This will be the final web server working in production, and we will dispense with the django web server, which is only used for tests. For Nginx to hand requests to Gunicorn, we will use a unix socket, which is created at startup and remains listening until a request arrives, then passes it to gunicorn. Let's start by updating the system. We install nginx and curl; curl is a utility that will help us test connections with Nginx. We also install pip-review and update pip, which is telling us it is outdated, and we update django and the other packages it reports as outdated. I am going to create an alias to start the django test server more comfortably. Finally we install gunicorn in our environment, and we try it by starting it to see if it is capable of serving the project. Now we create the socket file and the systemd service.
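Roughly the shell steps for this part; the alias name is my own choice:

    sudo apt update && sudo apt upgrade
    sudo apt install nginx curl

    # inside the virtual environment
    pip install pip-review
    pip-review --interactive               # update pip, django and friends
    pip install gunicorn

    # convenience alias for the django test server
    echo "alias runweb='python manage.py runserver 0.0.0.0:8000'" >> ~/.bashrc

    # quick check: can gunicorn serve the project?
    cd ~/LibrosWeb/LibrosWeb
    gunicorn --bind 0.0.0.0:8000 LibrosWeb.wsgi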
Among the services we can see the PyFilter service that we created at the beginning and the sshd service, among others. For each socket file, a service file must exist with the same name except for the extension, describing the service to start for incoming traffic on the socket. We start the socket and enable it so that it starts automatically at boot. We can see that the socket is active and listening, and that the service is inactive and will be activated by a request.
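The pair of unit files, modeled on Digital Ocean's gunicorn guide; the user, paths and worker count are assumptions to adapt:

    # /etc/systemd/system/gunicorn.socket
    [Unit]
    Description=gunicorn socket

    [Socket]
    ListenStream=/run/gunicorn.sock

    [Install]
    WantedBy=sockets.target

    # /etc/systemd/system/gunicorn.service  (same name, different extension)
    [Unit]
    Description=gunicorn daemon
    Requires=gunicorn.socket
    After=network.target

    [Service]
    User=frank
    Group=www-data
    WorkingDirectory=/home/frank/LibrosWeb/LibrosWeb
    ExecStart=/home/frank/.virtualenvs/envDjango20/bin/gunicorn \
              --workers 3 --bind unix:/run/gunicorn.sock LibrosWeb.wsgi:application

    [Install]
    WantedBy=multi-user.target

Then we start the socket and enable it at boot:

    sudo systemctl start gunicorn.socket
    sudo systemctl enable gunicorn.socket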
We try it with curl: the request activates the service, which hands it to gunicorn, and gunicorn responds with the HTML of our django application. Finally we configure nginx: we open a new server block in the sites-available directory and then link it into the sites-enabled directory.
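A server block in that spirit; the IP and the file name follow the examples above, the rest is the usual proxy-to-socket pattern:

    # /etc/nginx/sites-available/LibrosWeb
    server {
        listen 80;
        server_name 15.15.15.15;

        location / {
            include proxy_params;
            proxy_pass http://unix:/run/gunicorn.sock;
        }
    }

    # enable the site, test the configuration, restart nginx
    sudo ln -s /etc/nginx/sites-available/LibrosWeb /etc/nginx/sites-enabled
    sudo nginx -t && sudo systemctl restart nginx

    # the curl test against the socket mentioned above
    curl --unix-socket /run/gunicorn.sock localhost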
All that remains is to adjust the firewall, removing the permission for port 8000, which only the django test server uses, and granting new permissions for nginx. Here I get some errors because the static files are not found; that is because nginx is not serving static files. There is also a warning telling us that we are not connected over https, which is not a safe mode; we will deal with that afterwards. To solve the problem of the static files, the simplest option is to install the Whitenoise package, which allows our web application to serve its own static files without relying on nginx. Then we must configure django to work with Whitenoise and the static files, adding what you will see below. Although it is not strictly necessary, we will also add the Brotli package, to serve files in this compression format over https, which improves loading times compared to gzip.
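The django side of Whitenoise, roughly as its documentation describes it; STATIC_ROOT is the folder collectstatic will fill:

    # settings.py (assumes "import os" at the top, as Django 2.0 generates)
    MIDDLEWARE = [
        'django.middleware.security.SecurityMiddleware',
        'whitenoise.middleware.WhiteNoiseMiddleware',   # right after SecurityMiddleware
        # ... the rest of the middleware ...
    ]

    STATIC_URL = '/static/'
    STATIC_ROOT = os.path.join(BASE_DIR, 'static')
    STATICFILES_STORAGE = 'whitenoise.storage.CompressedManifestStaticFilesStorage'

installed, inside the environment, with:

    pip install whitenoise brotli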
We run collectstatic to automatically place all the static files django requires in the /static/ folder we defined in settings.py. Now it recognizes the style sheets and so on, and applies them. It still warns us that we are not connecting through https and that it is not safe, so let's change that now. We will need an SSL certificate to be able to serve the web application over HTTPS. To test this we will create a self-signed SSL certificate, which does not require any domain, and a strong Diffie-Hellman group to strengthen the connection with the client.
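The two openssl commands for this, in the style of Digital Ocean's self-signed certificate guide; the key sizes and paths are the usual ones, adjust to taste:

    # self-signed certificate valid for one year, no domain needed
    sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
        -keyout /etc/ssl/private/nginx-selfsigned.key \
        -out /etc/ssl/certs/nginx-selfsigned.crt

    # strong Diffie-Hellman group (this one takes a while to generate)
    sudo openssl dhparam -out /etc/nginx/dhparam.pem 2048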
Some developers feel comfortable using the nano editor in the shell to create and modify text files, but I prefer a more comfortable and manageable graphical interface. For this we can use WinSCP, an application that connects to our server over SFTP/SSH and manages its files. It has two window layouts; right now I am using the Windows Explorer style, and you can change it from here. We can also view and change the properties of a file easily, and to edit a text file we just right-click or double-click; in my case I use Notepad++. I will change a journald configuration variable so that it only keeps the records of the last 4 days, since I have more than 400 MB occupied by the journal alone, which keeps the records of all system events and services. We can also cap it by occupied size, and by other parameters; below I will leave a link with more details about the journal. We save the file and watch WinSCP transfer it to the server. For this to take effect we must restart the system.
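The journal setting, plus the two maintenance commands mentioned next, as a sketch:

    # /etc/systemd/journald.conf
    [Journal]
    MaxRetentionSec=4day     # keep only the last 4 days
    #SystemMaxUse=200M       # alternatively, cap by occupied size

    # see how much disk the journal occupies / empty all but the last two days
    journalctl --disk-usage
    sudo journalctl --vacuum-time=2d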
If you have your VPS in Digital Ocean, you may have problems when reconnecting, with the server denying the connection. This is normal and has an easy solution: what DO does for security is to activate its firewall on every reboot, even though we deactivated it before rebooting. This forces us to enter our DO account, open the console of our droplet, connect as root or another user with sudo privileges, and deactivate that firewall; then we can connect from putty and activate our own firewall again. The two journal commands shown above are the ones I use here: one shows how much of the disk the journal occupies, and the other empties everything except the last two days.

We will now create a start application for our project. Looking at the folder structure, an apps.py file has been created containing a class with the name of the application, which we copy and paste into the settings.py file. Then we create a start view that returns this little text. Next we create a urls.py file in the application to manage the requests: when the user makes a request with only the IP address and nothing else, the home page is shown. And this is the project's urls.py file, where it links to the home page.
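A minimal version of such an app; I call it inicio here purely as an illustration, adapt the name to yours. After running python manage.py startapp inicio, add the app's config class to INSTALLED_APPS in settings.py, and then:

    # inicio/views.py
    from django.http import HttpResponse

    def start(request):
        # the "little text" shown before the template exists
        return HttpResponse('Welcome to LibrosWeb')

    # inicio/urls.py
    from django.urls import path
    from . import views

    urlpatterns = [
        path('', views.start, name='start'),
    ]

    # LibrosWeb/urls.py (the project's urls.py, linking in the home page)
    from django.contrib import admin
    from django.urls import include, path

    urlpatterns = [
        path('admin/', admin.site.urls),
        path('', include('inicio.urls')),
    ]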
Something important to note: every time a file of the django project is modified, we have to restart gunicorn's WSGI service for the changes to take effect, and we do so. If we also modify the nginx configuration file, we must restart nginx as well, for the same reason; first we run its error test and, if everything is correct, restart the server. Notice that if we request the application over http, nginx redirects us to the safe https mode, and if the request has nothing after the IP, it routes it to the start application. We create an html page for tests and place all the templates in a new directory; if you use Notepad and accented characters, activate UTF-8 encoding. We also create, inside the static files folder, a new folder for the contents of the start application, and place there an image to use as the application icon. We modify the settings.py file to tell it where the root templates folder is, and the start view file so that it no longer shows the previous message, which we comment out, but renders the index.html template instead. I have also removed the comment on the icon check in the template, because now we have one, and we indicate in the template where to find it.

For those who prefer to use the pendrive as the only means of connecting to the application as administrator, we must reconfigure nginx so that the URL 15.15.15.15/admin/ is only accessible from the local IP 127.0.0.1: a new server policy tells it that all requests from the internet with /admin are redirected to the home page, and all requests from localhost go to the gunicorn socket. Remember to restart nginx to apply the changes.
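The exact directives are not legible in the video, so this is one plausible way to write that policy inside the HTTPS server block; chrome, launched on the server over SSH, reaches nginx from 127.0.0.1:

    location /admin {
        allow 127.0.0.1;               # the administrator's SSH-launched browser
        deny all;                      # everyone else
        error_page 403 =302 /;         # internet requests bounce to the home page
        include proxy_params;
        proxy_pass http://unix:/run/gunicorn.sock;
    }

    # apply the changes (and restart gunicorn whenever project files change)
    sudo nginx -t && sudo systemctl restart nginx
    sudo systemctl restart gunicorn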
We check that the URL 15.15.15.15/admin now redirects us to the home page as planned; it can no longer be accessed from the internet. There is also no error message about a missing icon, since we have it up here next to the page name. Finally, we have to modify the start.py script that runs when we connect to the server as the gela user and opens the google-chrome browser locally. We comment out the lines that start the django test server, which we no longer use, and the line that kills that process, and we change the port to 443. Now we can try inserting the pendrive, and the default administrative page appears, which is the login needed to enter; and since we have not added any icon to this template, it throws the missing-icon error again, but we already know how to solve that, right? Well, at this point we have a VPS configured from Windows, with a web application ready for production. It has been a long road to set up and understand each of the components integrated in a web server, and now we can create a snapshot as a backup copy and as a basis for any other website we want to create. This video has run much longer than I expected, so I hope I have not bored you too much. Until next time, greetings.
