Can’t chown /usr/local in High Sierra

/usr/local can no longer be chown’d in High Sierra. Instead, use:

sudo chown -R $(whoami) $(brew --prefix)/*
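
The restriction applies to /usr/local itself, not to its contents, and chowning the contents is all Homebrew needs (brew --prefix resolves to /usr/local on a standard install). In other words, it is only the first of these two commands that High Sierra rejects:

sudo chown -R $(whoami) /usr/local           # rejected on High Sierra
sudo chown -R $(whoami) $(brew --prefix)/*   # chowns only the contents, which still works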

When a job in a Jenkins pipeline fails at night, how do you resume it the same night?

When executing Selenium jobs in a Jenkins pipeline and a job fails at night, how do you resume it the same night?

Solution:

You can use the Naginator plugin to achieve the intended behavior. Configure it as follows:

Install the plugin -> check the Post-Build action “Retry build after failure” on your project’s configuration page.

If the build fails, it will be rescheduled to run again after the time you specified. You can choose how many times to retry running the job. For each consecutive unsuccessful build, you can choose to extend the waiting period.

AWS – Does Elastic Load Balancing actually prevent LOAD BALANCER failover?

I’ve taken this straight from some AWS documentation:

“As traffic to your application changes over time, Elastic Load Balancing scales your load balancer and updates the DNS entry. Note that the DNS entry also specifies the time-to-live (TTL) as 60 seconds, which ensures that the IP addresses can be remapped quickly in response to changing traffic.”

Two questions:

1) I was originally under the impression that a single static IP address would be mapped to multiple instances of an AWS load balancer, thereby providing fault tolerance at the balancer level: if, for instance, one machine crashed for whatever reason, the static IP address registered to my domain name would simply be dynamically ‘moved’ to another balancer instance and continue serving requests. Is this wrong? Based on the quote above from AWS, it seems that the only magic happening here is that AWS’s DNS servers hold multiple A records for your AWS-registered domain name, and after 60 seconds of no connection from the client, the TTL expires and Amazon’s DNS entry is updated to only send requests to active IPs. That still means 60 seconds of failed connections on the client side. True or false? And why?

2) If the above is true, would it be functionally equivalent if I were using a hosting provider such as GoDaddy, entered multiple A records, and set the TTL to 60 seconds?

Thanks!

Solution:

The ELB is assigned a DNS name, which you can then point an A record at as an alias. If you have your ELB set up with multiple instances, you define the health check: you can determine what path is checked, how often, and how many failures indicate an instance is down (for example, check / every 10s with a 5s timeout, and if it fails 2 checks consider it unhealthy). When an instance becomes unhealthy, all the remaining instances still serve requests without delay. If the instance returns to a healthy state (for example, it passes 2 checks in a row), it is added back as a healthy host in the load balancer.
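
For reference, that kind of health check can also be set from the CLI on a classic ELB. A minimal sketch matching the numbers above (the load balancer name and port are placeholders):

aws elb configure-health-check \
    --load-balancer-name my-load-balancer \
    --health-check Target=HTTP:80/,Interval=10,Timeout=5,UnhealthyThreshold=2,HealthyThreshold=2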

What the quote is referring to is the load balancer itself: in the event the load balancer has an issue or an AZ becomes unavailable, it describes what happens to the underlying ELB DNS record, not to the alias record you assign to it.

Whether or not traffic is affected also depends partly on how sessions are handled by your setup: whether they are sticky or handled by another system such as ElastiCache or your database.
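
You can see the underlying behavior by resolving the ELB’s own DNS name; it usually returns more than one IP address, and the set changes over time as the load balancer scales (the hostname below is hypothetical):

dig +short my-elb-1234567890.us-east-1.elb.amazonaws.com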

AWS S3 Can't do anything with one file

I’m having issues trying to remove a file from my S3 bucket with the following name: Patrick bla bla 1 PV@05-06-2018-19:42:01.jpg

If I try to rename it through the s3 console, it just says that the operation failed. If I try to delete it, the operation will “succeed” but the file will still be there.

I’ve tried removing it through the AWS CLI. When listing the object, I get this back:

 {
        "LastModified": "2018-06-05T18:42:05.000Z",
        "ETag": "\"b67gcb5f8166cab8145157aa565602ab\"",
        "StorageClass": "STANDARD",
        "Key": "test/\bPatrick bla bla 1 PV@05-06-2018-19:42:01.jpg",
        "Owner": {
            "DisplayName": "dev",
            "ID": "bd65671179435c59d01dcdeag231786bbf6088cb1ca4881adf3f5e17ea7e0d68"
        },
        "Size": 1247277
    },

But if I try to delete or head it, the CLI won’t find it.

aws s3api head-object --bucket mybucket --key "test/\bPatrick bla bla 1 PV@05-06-2018-20:09:37.jpg"

An error occurred (404) when calling the HeadObject operation: Not Found

Is there any way to remove, rename or just move this image from the folder?

Regards

Solution:

It looks like your object’s key contains a backspace (\b) character right after the test/ prefix. I’m sure there is a way to manage this using the awscli, but I haven’t worked out what it is yet.
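
One approach that may work from the shell (an untested sketch) is to let bash produce the literal backspace with ANSI-C quoting ($'...'), so the key matches exactly; the bucket name is taken from the script below:

aws s3api delete-object --bucket avondhupress --key $'test/\bPatrick bla bla 1 PV@05-06-2018-19:42:01.jpg'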

Here’s a Python script that works for me:

import boto3

s3 = boto3.client('s3')

bucket = 'avondhupress'
# '\b' in a Python string literal is the backspace character the key contains
key = 'test/\bPatrick bla bla 1 PV@05-06-2018-19:42:01.jpg'
s3.delete_object(Bucket=bucket, Key=key)

Or the equivalent in node.js:

const aws = require('aws-sdk');
const s3 = new aws.S3({ region: 'us-east-1', signatureVersion: 'v4' });

const params = {
  Bucket: 'avondhupress',
  // '\b' in a JavaScript string literal is the backspace character
  Key: 'test/\bPatrick bla bla 1 PV@05-06-2018-19:42:01.jpg',
};

s3.deleteObject(params, (err, data) => {
  if (err) console.error(err, err.stack);
});

How can I use the same tasks for yum on CentOS and dnf on Fedora with Ansible?

I use CentOS 7 at work and Fedora 28 at home, and I’m writing an Ansible playbook that installs some packages on both.

But CentOS uses yum and Fedora uses dnf. I know there are yum and dnf modules, but they are separate.

I’d like to keep the playbook simple. How can I solve this?

Solution:

You could use the package module. It’ll sort out dnf vs yum behind the scenes. https://docs.ansible.com/ansible/latest/modules/package_module.html#package-module
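
As a quick sanity check, the package module can also be exercised from an ad-hoc command (the package name below is just a placeholder); in a playbook task you would pass the same name and state arguments to package:

ansible all -b -m package -a "name=htop state=present"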

Run script commands in parallel

I’ve got a bash script in which I need to run two commands in parallel.

For example, I’m executing npm install, which takes some time (20-50 seconds), and I run it on two different folders in sequence: first npm install in the books folder, and then in the orders folder. Is there a way to run both in parallel in a shell script?

For example, assume the script looks like the following:

#!/usr/bin/env bash

dir=$(pwd)

cd $tmpDir/books/
npm install
grunt
npm prune production

cd $tmpDir/orders/
npm install
grunt
npm prune production

Solution:

You could use & to run the process in the background, for example:

#!/bin/sh

cd $HOME/project/books/
npm install &

cd $HOME/project/orders/
npm install &

# if want to wait for the processes to finish
wait

To run and wait for nested/multiple processes, you could use a subshell ( ... ), for example:

#!/bin/sh

(sleep 10 && echo 10 && sleep 1 && echo 1) &

cd $HOME/project/books/
(npm install && grunt && npm prune production ) &

cd $HOME/project/orders/
(npm install && grunt && npm prune production ) &

# waiting ...
wait

In this case, notice that the commands are within ( ) and use &&, which means that the right side will only be evaluated if the left side succeeds (exit 0). So for the example:

(sleep 10 && echo 10 && sleep 1 && echo 1) &
  • It creates a subshell by putting things between ( )
  • runs sleep 10 and, if it succeeds (&&), runs echo 10; if that succeeds, runs sleep 1; and if that succeeds, runs echo 1
  • runs all of this in the background by ending the command with &
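
If you also want to know whether each background job succeeded, one possible extension of the same idea (a sketch reusing the hypothetical folders above) is to record each subshell’s PID and check its exit status with wait:

#!/bin/sh

cd $HOME/project/books/
(npm install && grunt && npm prune production) &
books_pid=$!

cd $HOME/project/orders/
(npm install && grunt && npm prune production) &
orders_pid=$!

# wait <pid> returns the exit status of that background job
wait $books_pid || echo "books build failed"
wait $orders_pid || echo "orders build failed"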

Check if All Values Exist as Keys in Dictionary

I have a list of values and a dictionary. I want to ensure that each value in the list exists as a key in the dictionary. At the moment I’m using two sets to figure out if any values don’t exist in the dictionary:

unmapped = set(foo) - set(bar.keys())

Is there a more Pythonic way to test this, though? It feels like a bit of a hack.

Solution:

Your approach will work; however, there is overhead from the conversion to set.

Another solution with the same time complexity would be:

all(i in bar for i in foo)

Both of these have time complexity O(len(foo)), since a membership test against a dict is O(1) on average.

bar = {str(i): i for i in range(100000)}
foo = [str(i) for i in range(1, 10000, 2)]

%timeit all(i in bar for i in foo)
462 µs ± 14.8 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)

%timeit set(foo) - set(bar)
14.6 ms ± 174 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)

# The overhead is all the difference here:

foo = set(foo)
bar = set(bar)

%timeit foo - bar
213 µs ± 1.48 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)

The overhead makes a pretty big difference here, so I would choose all.