In this article, we will develop a simple application with ASP.NET Core. Our goal is that after we push the code to GitHub, deployment starts automatically.

Our stages will be as in the picture above. I will push the code to GitHub, which will trigger Jenkins on an Ubuntu machine via a webhook. Jenkins will fetch the code, run the tests, and build the Docker image. The containers will then be started via Docker Compose. On the Ubuntu virtual machine, Nginx listens on port 80 and redirects the incoming requests. Test, build, and deploy will all run on the same machine.

I will not describe these technologies and tools in much detail, because the purpose of this article is to implement CI (Continuous Integration). It may be useful to know the following topics:


- DevOps, Continuous Integration, Continuous Delivery, Jenkins

- Nginx and Reverse Proxy

- ASP.NET Core

1) Creating a simple ASP.NET Core application

I will create the application using the CLI on macOS; you can also create the project on Windows with Visual Studio, without using the CLI.
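As a sketch, the CLI steps might look like the following (the test project name and the solution name are assumptions; the web project name matches the reference command later in the article):

```shell
# create the MVC web project and an xUnit test project,
# then tie both into one solution
dotnet new mvc -o CoreApp.Web
dotnet new xunit -o CoreApp.Tests     # test project name is an assumption
dotnet new sln -n CoreApp
dotnet sln add CoreApp.Web/CoreApp.Web.csproj
dotnet sln add CoreApp.Tests/CoreApp.Tests.csproj
```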

In CoreApp.Web, you can run the dotnet run command to verify everything works. Let's write a small unit test.

Run the dotnet add reference ../CoreApp.Web/CoreApp.Web.csproj command to add our project as a reference to the test project.

Don't forget to add the Microsoft.AspNetCore.All package with dotnet add package Microsoft.AspNetCore.All.

In our test, we verify the type of the value returned by the Index method. The dotnet test command runs our tests.
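A minimal test of this kind could look like the sketch below; the controller namespace and class names are assumptions based on the default MVC template:

```
using CoreApp.Web.Controllers;   // assumed namespace of the MVC project
using Microsoft.AspNetCore.Mvc;
using Xunit;

public class HomeControllerTests
{
    [Fact]
    public void Index_ReturnsViewResult()
    {
        var controller = new HomeController();

        var result = controller.Index();

        // the Index action should render a view, not redirect or throw
        Assert.IsType<ViewResult>(result);
    }
}
```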

Now we create the Dockerfile and the .dockerignore file.
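For reference, a multi-stage Dockerfile for such a project might look roughly like this; the image tags and project name are assumptions for the .NET Core generation this article was written against:

```
# build stage: restore and publish the web project
FROM microsoft/aspnetcore-build:2.0 AS build
WORKDIR /src
COPY . .
RUN dotnet restore
RUN dotnet publish CoreApp.Web -c Release -o /app

# runtime stage: a smaller image containing only the published output
FROM microsoft/aspnetcore:2.0
WORKDIR /app
COPY --from=build /app .
ENTRYPOINT ["dotnet", "CoreApp.Web.dll"]
```

The .dockerignore would typically exclude bin/ and obj/ so local build output is not copied into the image context.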


We created a Docker image in the previous step. We will start this image as 3 containers; the purpose of this step is to balance the incoming requests to the site.
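A docker-compose.yml along these lines would start three containers from the same image on different host ports; the service names, image name, and ports are assumptions:

```
version: '3'
services:
  webapp1:
    image: coreapp
    ports:
      - "5001:80"   # host port 5001 -> container port 80
  webapp2:
    image: coreapp
    ports:
      - "5002:80"
  webapp3:
    image: coreapp
    ports:
      - "5003:80"
```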

2) Docker, Jenkins, and Nginx settings on the server

I will launch an Ubuntu machine on Amazon Web Services to test the application. As in the picture below, add new rules for ports 80 and 8080 to allow traffic. Note the warning under the rules: you can allow only your own IP address for port 8080. You can also make this setting later in the Security Group.

- Nginx: if you don't want to create an authorized user, you can work as root with the sudo su command. For the Nginx installation, you can follow this tutorial; afterwards you can test it in the browser using the public IP or public DNS.

Configure the default file in the /etc/nginx/sites-enabled/ directory. Don't forget to restart Nginx with service nginx restart.
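The default site file could then reverse-proxy and load-balance across the three containers; the upstream ports here match the compose sketch above and are assumptions:

```
upstream coreapp {
    # the three containers started by docker-compose
    server 127.0.0.1:5001;
    server 127.0.0.1:5002;
    server 127.0.0.1:5003;
}

server {
    listen 80;

    location / {
        proxy_pass http://coreapp;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```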

At this stage Nginx will give an error, because no application is listening on the ports yet.

- Docker: for the installation, you can follow this tutorial. Then install Docker Compose with apt install docker-compose.

- Jenkins: for the installation, you can follow this tutorial. Then create a new job.


In the following two pictures, we add the GitHub repository link. When code is pushed to the master branch, Jenkins will be triggered by the webhook. This is a simple auto-deploy example, but in the real world the stages could look like this: when code is pushed to a branch, Jenkins is triggered; the build and test stages run; then Jenkins pushes the code to the release branch and deploys to the servers.

Add the following commands to the Execute Shell section. If you get a permission error when Jenkins runs the commands, you can add jenkins ALL=(ALL) NOPASSWD: ALL to the sudoers file with sudo visudo (/etc/sudoers); don't save it as a temp file, overwrite the original. Restart Jenkins with service jenkins restart.

The shell commands will perform the following stages: first the tests run; if there is no error, the previous Docker containers are stopped, Docker builds a new image, and the new containers are started to listen on their ports. One drawback of the way Docker works is that many untagged images are created during this process. You can run docker rmi $(docker images | grep "^<none>" | awk '{print $3}') manually or automatically to delete the unnecessary images.
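Putting the stages above together, the Execute Shell section might contain something like the sketch below; the workspace path, project names, and image tag are assumptions:

```shell
#!/bin/bash
set -e                                  # abort the build on the first failing command

cd /var/lib/jenkins/workspace/CoreApp   # assumed Jenkins workspace path

dotnet test CoreApp.Tests               # 1) run the tests; a failure stops the deploy

sudo docker-compose down                # 2) stop the previous containers
sudo docker build -t coreapp .          # 3) build a fresh image
sudo docker-compose up -d               # 4) start the new containers

# optional cleanup of untagged (<none>) images left behind by rebuilds
sudo docker rmi $(sudo docker images | grep "^<none>" | awk '{print $3}') || true
```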

Let's add a webhook to the GitHub repository.

Now that the stages are completed, you can watch the demo. (The video language is Turkish.)

As I mentioned in the video, this setup is not ready for production use. For example, the first problem that comes to mind is photos. Photos uploaded by users cannot be saved in the container, because the container will be deleted in the next release. So uploaded photos must be stored somewhere else. Nginx would also be a better choice for serving static files, because its caching mechanism is much better. Likewise, where to store user session information should be considered: the containers work independently of each other, so when the next request is directed to another container, the session information is left behind in the previous one. User session information therefore needs to be kept in a common place, not in the container. I will write articles about these issues.


Source code: