Does a curl/wget request respond on a random ephemeral port?

When I was setting up a VPC in AWS, I created an instance in a public subnet. The instance was not able to ping google.com and timed out when connecting to the yum repository.

The security groups were open with the required ports. When I edited the ACL to allow inbound ICMP from 0.0.0.0/0, the instance was able to ping google.com, but the yum repository was still timing out. All curl/wget/telnet commands returned errors; only ping worked.

Only when I added the inbound port range 1024–65535 from 0.0.0.0/0 to the ACL did the yum repository become reachable. Why is that?

Outbound traffic was set to allow all in the ACL. Why do we need to allow inbound traffic on these ports to connect to any site?

Solution:

In AWS, NACLs are attached to subnets. Security Groups are attached to instances (actually the network interface of an instance).

You must have deleted NACL Inbound Rule 100, which then falls through to Rule *, which blocks ALL incoming traffic. Unless you have a specific reason, I would use the default rules in your NACL and control access using Security Groups, which are “stateful”. NACLs are “stateless”.

The default Inbound rules for NACLs:

Rule | Type        | Protocol | Port Range | Source    | Allow/Deny
100  | ALL Traffic | ALL      | ALL        | 0.0.0.0/0 | ALLOW
*    | ALL Traffic | ALL      | ALL        | 0.0.0.0/0 | DENY

Your Outbound rules should look like this:

Rule | Type        | Protocol | Port Range | Destination | Allow/Deny
100  | ALL Traffic | ALL      | ALL        | 0.0.0.0/0   | ALLOW
*    | ALL Traffic | ALL      | ALL        | 0.0.0.0/0   | DENY

When your EC2 instance connects outbound to another system, the return traffic will usually arrive on an ephemeral port between 1024 and 65535. Ports 0–1023 are privileged ports, reserved for well-known services such as HTTP (80), HTTPS (443), SMTP (25, 465, 587), etc. A Security Group remembers the connection attempt and automatically allows the return traffic, but a stateless NACL does not — which is why the inbound ephemeral port range must be opened explicitly in the NACL.
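If you do need a custom inbound NACL rule for the ephemeral return traffic rather than the default allow-all, it can be added with the AWS CLI. This is only a sketch: the ACL ID below is a placeholder, and rule number 200 is an arbitrary choice.

```shell
# Allow inbound TCP return traffic on the ephemeral port range.
# acl-0123456789abcdef0 is a placeholder for your network ACL ID.
aws ec2 create-network-acl-entry \
  --network-acl-id acl-0123456789abcdef0 \
  --ingress \
  --rule-number 200 \
  --protocol tcp \
  --rule-action allow \
  --port-range From=1024,To=65535 \
  --cidr-block 0.0.0.0/0
```

NACL rules are evaluated in ascending rule-number order, so this rule must come before any broader deny rule.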

 


Automating Slapd Install

You could execute the following commands:

export DEBIAN_FRONTEND=noninteractive
debconf-set-selections <<< 'slapd slapd/root_password password 123123'
debconf-set-selections <<< 'slapd slapd/root_password_again password 123123'
apt-get install slapd ldap-utils -y

Or for a more complex installation you can use:
cat > /root/debconf-slapd.conf << 'EOF'
slapd slapd/password1 password admin
slapd slapd/internal/adminpw password admin
slapd slapd/internal/generated_adminpw password admin
slapd slapd/password2 password admin
slapd slapd/unsafe_selfwrite_acl note
slapd slapd/purge_database boolean false
slapd slapd/domain string phys.ethz.ch
slapd slapd/ppolicy_schema_needs_update select abort installation
slapd slapd/invalid_config boolean true
slapd slapd/move_old_database boolean false
slapd slapd/backend select MDB
slapd shared/organization string ETH Zurich
slapd slapd/dump_database_destdir string /var/backups/slapd-VERSION
slapd slapd/no_configuration boolean false
slapd slapd/dump_database select when needed
slapd slapd/password_mismatch note
EOF
export DEBIAN_FRONTEND=noninteractive
cat /root/debconf-slapd.conf | debconf-set-selections
apt install ldap-utils slapd -y
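Each preseed line must follow debconf's `owner question type value` format (the value is optional for `note` questions). A quick way to catch formatting mistakes before piping such a file into debconf-set-selections is to check the field count per line; the `check_preseed` helper below is just an illustration, not part of the slapd tooling:

```shell
# Check that every non-empty line of a debconf preseed file has at
# least three fields: owner, question, type (value is optional,
# e.g. for "note" questions).
check_preseed() {
  awk 'NF > 0 && NF < 3 { print "bad line " NR ": " $0; bad=1 } END { exit bad }' "$1"
}

printf '%s\n' \
  'slapd slapd/backend select MDB' \
  'slapd slapd/password_mismatch note' > /tmp/preseed.test
check_preseed /tmp/preseed.test && echo "preseed OK"
```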

The possible attributes for debconf-set-selections are defined in the slapd.templates file in the Debian package, together with a description of what each configuration attribute does.

For slapd on Debian Jessie, you can find the file here: https://anonscm.debian.org/cgit/pkg-openldap/openldap.git/tree/debian/slapd.templates?h=jessie

 

Convert jenkins job/project to pipeline

Install the plugin: Convert To Pipeline Plugin

The plugin provides a link on the left menu at 3 locations:

Root Level
Folder Level
Freestyle Job level

 

Click on the link at any given level and the conversion UI will appear.

Usage

  1. Click on a link at Root level or Folder level or Job level.
  2. Select the job from the drop-down list that is the beginning point of the “chain”. If the job-level link is clicked, this drop-down list will not be visible.
  3. Provide the new pipeline job name. If this is not specified, the plugin will attempt to create a new pipeline job with the naming convention of “oldname-pipeline”.
  4. Check “Recursively convert downstream jobs if any?” if you wish to have all the downstream jobs converted into this new pipeline. The plugin will write all the logic of current and downstream jobs into a single pipeline.
  5. Check “Commit Jenkinsfile?” if you would like the plugin to create a Jenkinsfile and commit it back to the SCM. The plugin will commit the Jenkinsfile at the root of the SCM repository it finds in the first job (selected in step 1 above). It will attempt to commit to this repo using the credentials it finds in the first job.
    1. Do note that the plugin will check out the repo into a temporary workspace on the master (JENKINS_HOME/plugins/convert-to-pipeline/ws). Once the conversion is complete and the Jenkinsfile is committed back to the repo, the workspace will be deleted.
  6. Click “Convert” to convert the Freestyle job configurations into a single scripted pipeline job. Once the conversion is complete and the new job is created, you will be redirected to the newly created pipeline job.

Jenkins pipeline: git branch as parameter

Install the Active Choices plugin via Jenkins -> Manage Jenkins -> Manage Plugins.

Then in the Jenkins pipeline add the parameter “Active Choices Reactive Parameter”
Add the name as BRANCH

Then select the groovy script option. Then in the script section add the following:

tags = []
text = "get_git_branches.sh git@bitbucket.org:kompeld/k-apid.git".execute().text
text.eachLine { tags.push(it) }
return tags

On your Jenkins machine, create the following script (and make it executable with chmod +x):
vi /usr/local/bin/get_git_branches.sh

#!/bin/bash
GIT_URL=$1
git ls-remote --heads "${GIT_URL}" | sed 's?.*refs/heads/??'
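To see what the sed expression does, you can feed it some fabricated `git ls-remote --heads` output (the hashes and branch names here are made up):

```shell
# Strip everything up to and including "refs/heads/" from each line,
# leaving only the branch name.
printf '%s\n' \
  'a1b2c3d4	refs/heads/main' \
  'e5f6a7b8	refs/heads/feature/login' |
sed 's?.*refs/heads/??'
# prints:
# main
# feature/login
```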

Then, in the Groovy pipeline, you can access the selected branch as params.BRANCH.

python pip broken on ubuntu: forcing reinstallation of alternative /usr/bin/pip2 because link group pip is broken

I got the same error. I did this and it worked!

sudo apt-get install --reinstall python2.7

This reinstalls Python. Don’t ever try to uninstall Python; it will break your OS, as parts of Ubuntu depend on Python. Then:

sudo apt-get purge python-pip

This is to remove pip.

wget https://bootstrap.pypa.io/get-pip.py

This downloads the pip installer script. Run it with:

sudo python get-pip.py

Then you can install packages using pip, for example:

sudo pip install package-name

change python for pip

You can check which Python is configured with pip with:

pip --version

pip is packaged per Python version under the name python$VERSION-pip. Once the version-specific pip is installed, you can change the default pip with:

update-alternatives --install /usr/bin/pip pip /usr/bin/pip2 1

update-alternatives --config pip

Then select the pip version you want.

Kubernetes – Can I start a pod with a container without any process?

We have a Docker image, and I am trying to deploy it using Kubernetes. My question is: can I deploy a pod with a single container but not run any process in the container while it comes up, and only run it after the container starts? That is, after the container starts, go into the container’s bash and run the process (let’s say a Java process)? Is that possible?

Right now, when I try to deploy a pod with no process running, I get this error:

Back-off restarting failed docker container Error syncing pod, skipping: failed to “StartContainer” for “containerName” with CrashLoopBackOff:

But when I start the container with a Java process, it works. I’m not sure if it’s because there is no process in the container?

Solution:

A container must keep a foreground process running; when no process is started (or the process exits immediately), Kubernetes considers the container failed and keeps restarting it, which produces the CrashLoopBackOff error above. One workaround is to keep the container alive with a placeholder process:

apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
  - name: app-container
    image: app-image:version
    command: [ "/bin/bash", "-c", "--" ]
    args: [ "while true; do sleep 30; done;" ]
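With the pod kept alive by the sleep loop, you can then open a shell in it and start your process by hand (the pod name `app` comes from the spec above; the java invocation is only an example):

```shell
# Open an interactive shell inside the running container...
kubectl exec -it app -- /bin/bash

# ...and then, inside the container, start the process manually, e.g.:
# java -jar /opt/app/app.jar
```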

You could then run your process, BUT:

  • Your container will not be bound to the second process and will not end when that second process ends
  • You have to do manual work
  • You could save this effort by just running your application as the container’s command