Does AWS EC2 Auto Scaling add a new instance exactly like the current one?

When an EC2 Auto Scaling group adds a new instance, is it one exactly like the current instance (assuming you only have one as a baseline), including any changes you have made post-launch, or does it start one that is identical to the first instance’s initial state?

Solution:

It will launch whatever AMI was specified in the Launch Configuration. All instances launched by the Auto Scaling group will be identical to the state captured in that AMI; any changes you make to a running instance after launch are not carried over. If you want post-launch changes included, bake them into a new AMI and point the launch configuration at it.

See
https://docs.aws.amazon.com/autoscaling/ec2/userguide/LaunchConfiguration.html
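
For example, to roll post-launch changes into future instances, a minimal sketch with the AWS CLI (the names baseline-v2, my-lc, my-asg and the IDs here are hypothetical placeholders):

# bake the modified instance into a new AMI
aws ec2 create-image --instance-id i-0123456789abcdef0 --name "baseline-v2"

# point a new launch configuration at that AMI
aws autoscaling create-launch-configuration \
    --launch-configuration-name my-lc \
    --image-id ami-0123456789abcdef0 \
    --instance-type t2.micro

# tell the ASG to use it for future launches
aws autoscaling update-auto-scaling-group \
    --auto-scaling-group-name my-asg \
    --launch-configuration-name my-lc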


Provide password for running a script inside another script as different user

Imagine you run a script as user A: sudo -u A ./script.sh
Inside that script.sh there is a line that calls another script, script2.sh, as a different user B: sudo -u B ./script2.sh. Now I get prompted for user B's password. Is there any way to provide that password upon the call?

P.S. I know all the security holes this can create. Please do not point them out.

Solution:

Yes, there is an option for that, but you have to provide the password when executing the script or hard-code it in the script itself. Try the example below:

echo 'passwordofB' | sudo -u B -S ./script2.sh

You can also do it like this:

 sudo -u A ./script.sh passwordofB #as a command line parameter

Now, inside script.sh:

echo $1 | sudo -u B -S ./script2.sh

You are executing another script (sudo -u B ./script2.sh) from ./script.sh, right? Change that line to echo $1 | sudo -u B -S ./script2.sh and run your first script as sudo -u A ./script.sh passwordofB, where passwordofB is the password for user B.
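
A minimal end-to-end sketch (assuming both scripts sit in the current directory):

#!/bin/bash
# script.sh -- runs script2.sh as user B, feeding B's password to sudo on stdin
# $1 is B's password, passed on the command line (insecure, per the caveat above)
echo "$1" | sudo -u B -S ./script2.sh

Invoke it as sudo -u A ./script.sh passwordofB. The -S flag makes sudo read the password from standard input instead of prompting on the terminal.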

How to count occurrences of a specific word in a group of files with bash/shell script

I have two text files, simple.txt and simple1.txt, with the following data in them:

    simple.txt--

    hello
    hi hi hello
    this
    is it

    simple1.txt--
    hello hi
    how are you



[]$ tr ' ' '\n' < simple.txt | grep  -i -c '\bh\w*'
4
[]$ tr ' ' '\n' < simple1.txt | grep  -i -c '\bh\w*'
3

These commands show the number of words that start with “h” for each file, but I want to display the total count, i.e. 7 across both files. Can I do this in a single command/shell script?

P.S.: I had to write two commands because tr does not accept two file names.

Solution:

This alternative requires no pipelines:

$ awk -v RS='[[:space:]]+' '/^h/{i++} END{print i+0}' simple.txt simple1.txt
7

How it works

  • -v RS='[[:space:]]+'

    This sets the record separator to any run of whitespace, so awk treats each word as a record. (Treating RS as a regular expression is a GNU awk extension.)

  • /^h/{i++}

    For any record (word) that starts with h, we increment variable i by 1.

  • END{print i+0}

    After we have finished reading all the files, we print out the value of i. Adding 0 forces numeric output, so if no words matched we print 0 instead of an empty string.
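
If you would rather keep your original tr/grep approach, a minimal alternative is to concatenate the files with cat first, since tr reads standard input:

[]$ cat simple.txt simple1.txt | tr ' ' '\n' | grep -i -c '\bh\w*'
7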

How can I apply a function to itself?

Suppose I have a function, f, which takes in some variable and returns a variable of the same type. For simplicity, let’s say

def f(x):
    return x/2+1

I’m interested in applying f to itself over and over. Something like f(f(f(...(f(x))...))).

I could do this like

s = f(x)
for i in range(100):
    s = f(s)

But I was wondering if there was a simpler, less verbose way of doing the same thing. I want to avoid for loops (just as a challenge to myself). Is there maybe some way of using map or a similar function to accomplish this?

Solution:

Is there maybe some way of using map or a similar function to accomplish this?

Not map, but reduce. I wouldn’t use it for this, but you could call reduce on an n-item sequence to cause f to be called n times. For example:

>>> from functools import reduce  # a builtin on Python 2; needed on Python 3
>>> def f(x):
...   return x+1
... 
>>> reduce(lambda n,_: f(n), range(100), 42)
142

Explanation:

  • n is assigned each successive return value of f.
  • _ is assigned each number from range(100) in turn. These numbers are all ignored; all that matters is how many there are.
  • 42 is the starting value.

100 nested calls to f(f(f...(f(42))...)) results in 142.
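
If you do this often, you can wrap the pattern in a tiny helper. A sketch (repeat_apply is a made-up name, not a standard function):

from functools import reduce

def repeat_apply(f, n, x):
    # apply f to x n times: f(f(...f(x)...))
    return reduce(lambda acc, _: f(acc), range(n), x)

print(repeat_apply(lambda x: x/2 + 1, 100, 1.0))  # ~2.0, the fixed point of x/2 + 1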

Make operator overloading less redundant in Python?

I’m writing a class that wraps the list type.
I just wrote this and I’m wondering whether there is a less redundant way to do it:

class Vector:
    def __mul__(self, other):
        #Vector([1, 2, 3]) * 5 => Vector([5, 10, 15])
        if isinstance(other, int) or isinstance(other, float):
            tmp = list()
            for i in self.l:
                tmp.append(i * other)
            return Vector(tmp)
        raise VectorException("We can only mul a Vector by a scalar")

    def __truediv__(self, other):
        #Vector([1, 2, 3]) / 5 => Vector([0.2, 0.4, 0.6])
        if isinstance(other, int) or isinstance(other, float):
            tmp = list()
            for i in self.l:
                tmp.append(i / other)
            return Vector(tmp)
        raise VectorException("We can only div a Vector by a Scalar")

    def __floordiv__(self, other):
        #Vector([1, 2, 3]) // 2 => Vector([0, 1, 1])
        if isinstance(other, int) or isinstance(other, float):
            tmp = list()
            for i in self.l:
                tmp.append(i // other)
            return Vector(tmp)
        raise VectorException("We can only div a Vector by a Scalar")

As you can see, every overloaded method is a copy/paste of the previous with just small changes.

Solution:

What you want to do here is dynamically generate the methods. There are multiple ways to do this, from going super-dynamic and creating them on the fly in a metaclass’s __getattribute__ (although that doesn’t work for some special methods—see the docs)
to generating source text to save in a .py file that you can then import. But the simplest solution is to create them in the class definition, something like this:

import operator

def _make_op_method(op):
    def _op(self, other):
        if isinstance(other, (int, float)):
            tmp = list()
            for i in self.l:
                tmp.append(op(i, other))
            return Vector(tmp)
        raise VectorException("We can only {} a Vector by a scalar".format(
            op.__name__.strip('_')))
    _op.__name__ = op.__name__
    return _op

# inside the class body:
__mul__ = _make_op_method(operator.__mul__)
__truediv__ = _make_op_method(operator.__truediv__)
# and so on

You can get fancier and set _op.__doc__ to an appropriate docstring that you generate (see functools.wraps in the stdlib for some relevant code), and build __rmul__ and __imul__ the same way you build __mul__, and so on. And you can write a metaclass, class decorator, or function generator that wraps up some of the details if you’re going to be doing many variations of the same thing. But this is the basic idea.

The operator.mul, etc., come from the operator module in the stdlib—they’re just trivial functions where operator.__mul__(x, y) basically just calls x * y, and so on, made for when you need to pass around an operator expression as a function.
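
For example:

>>> import operator
>>> operator.__mul__(6, 7)
42
>>> 6 * 7
42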

There are some examples of this kind of code in the stdlib—although far more examples of the related but much simpler __rmul__ = __mul__.

The key here is that there’s no difference between names you create with def and names you create by assigning with =. Either way, __mul__ becomes an attribute of the class, and its value is a function that does what you want.
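
A minimal self-contained illustration of that equivalence (a toy Vector with just this one method; _scale is a made-up name):

def _scale(self, other):
    # an ordinary module-level function...
    return Vector([i * other for i in self.l])

class Vector:
    def __init__(self, l):
        self.l = l
    __mul__ = _scale  # ...bound as a method by plain assignment

print((Vector([1, 2, 3]) * 5).l)  # [5, 10, 15]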

If you don’t understand how that works, you probably shouldn’t be doing this, and should settle for Ramazan Polat’s answer. It’s not quite as compact, or as efficient, but it’s surely easier to understand.

Java: Sum two or more time series

I have multiple time series:

       x
|    date    | value |
| 2017-01-01 |   1   |
| 2017-01-05 |   4   |
|     ...    |  ...  |

       y
|    date    | value |
| 2017-01-03 |   3   |
| 2017-01-04 |   2   |
|     ...    |  ...  |

Frustratingly, in my dataset there isn’t always a matching date in both series. Where one is missing, I want to use the value from the last available date (or 0 if there isn’t one).
e.g. for 2017-01-03 I would use y=3 and x=1 (from the date before) to get output = 3 + 1 = 4

I have each timeseries in the form:

class Timeseries {
    List<Event> x = ...;
}

class Event {
    LocalDate date;
    Double value;
}

and have read them into a List<Timeseries> allSeries

I thought I might be able to sum them using streams

List<Timeseries> allSeries = ...
Map<LocalDate, Double> byDate = allSeries.stream()
    .flatMap(s -> s.getEvents().stream())
    .collect(Collectors.groupingBy(Event::getDate,
        Collectors.summingDouble(Event::getValue)));

But this wouldn’t apply the missing-date logic I mentioned above.

How else could I achieve this? (It doesn’t have to be with streams.)

Solution:

I’d say you need to extend the Timeseries class with an appropriate query method.

class Timeseries {
    private final NavigableMap<LocalDate, Double> eventValues = new TreeMap<>();
    private final List<Event> eventList;

    public Timeseries(List<Event> events) {
        events.forEach(e -> eventValues.put(e.getDate(), e.getValue()));
        eventList = new ArrayList<>(events);
    }

    public List<Event> getEvents() {
        return Collections.unmodifiableList(eventList);
    }

    public Double getValueByDate(LocalDate date) {
        Double value = eventValues.get(date);
        if (value == null) {
            // look at the entries strictly before the requested date
            SortedMap<LocalDate, Double> head = eventValues.headMap(date);
            value = head.isEmpty()
                ? 0.0                       // no earlier date
                : head.get(head.lastKey()); // latest date before
        }
        return value;
    }
}

Then to merge

Map<LocalDate, Double> values = new TreeMap<>();
List<LocalDate> allDates = allSeries.stream()
    .flatMap(s -> s.getEvents().stream().map(Event::getDate))
    .distinct().collect(Collectors.toList());

for (LocalDate date : allDates) {
    for (Timeseries series : allSeries) {
        values.merge(date, series.getValueByDate(date), Double::sum);
    }
}

Edit: actually, the NavigableMap interface is even more useful in this case; it reduces the missing-data case to:

Double value = eventValues.get(date);
if (value == null) {
    // floorEntry returns the entry with the greatest key <= date, or null;
    // we want the last available date, so floorEntry (not ceiling) is the right call
    Entry<LocalDate, Double> floor = eventValues.floorEntry(date);
    value = floor != null ? floor.getValue() : 0.0;
}
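
A quick check against the sample data above (a sketch; it assumes an Event(LocalDate, Double) constructor, which the original classes don’t show, and Java 9+ List.of):

Timeseries x = new Timeseries(List.of(
    new Event(LocalDate.of(2017, 1, 1), 1.0),
    new Event(LocalDate.of(2017, 1, 5), 4.0)));
Timeseries y = new Timeseries(List.of(
    new Event(LocalDate.of(2017, 1, 3), 3.0),
    new Event(LocalDate.of(2017, 1, 4), 2.0)));

LocalDate d = LocalDate.of(2017, 1, 3);
// x has no entry for 2017-01-03, so it falls back to 2017-01-01 -> 1.0
System.out.println(x.getValueByDate(d) + y.getValueByDate(d)); // 4.0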

"array initializer needs an explicit target-type" – why?

Following the JEP 286: Local-Variable Type Inference description,

I am wondering what the reason is for introducing such a restriction as:

Main.java:199: error: cannot infer type for local variable k

    var k = { 1 , 2 };
        ^   
(array initializer needs an explicit target-type)

So to me, logically, it should be:

var k = {1, 2}; // Infers int[]
var l = {1, 2L, 3}; // Infers long[]

Because the Java compiler can already infer the proper type of an array:

void decide() {
    arr(1, 2, 3);  // call  void arr(int ...arr)
    arr(1, 2L, 3); // call  void arr(long ...arr)
}

void arr(int ...arr) {
}

void arr(long ...arr) {
}

So what is the impediment?

Solution:

Every time we improve the reach of type inference in Java, we get a spate of “but you could also infer this too, why don’t you?” (Or sometimes, less politely.)

Some general observations on designing type inference schemes:

  • Inference schemes will always have limits; there are always cases at the margin where we cannot infer an answer, or end up inferring something surprising. The harder we try to infer everything, the more likely we will infer surprising things. This is not always the best tradeoff.
  • It’s easy to cherry-pick examples of “but surely you can infer in this case.” But if such cases are very similar to other cases that do not have an obvious answer, we’ve just moved the problem around — “why does it work for X but not Y where X and Y are both Z?”
  • An inference scheme can always be made to handle incremental cases, but there is almost always collateral damage, either in the form of getting a worse result in other cases, increased instability (where seemingly unrelated changes can change the inferred type), or more complexity. You don’t want to optimize just for number of cases you can infer; you want to optimize also for an educated user’s ability to predict what will work and what will not. Drawing simpler lines (e.g., don’t bother to try to infer the type of array initializers) often is a win here.
  • Given that there are always limits, it’s often better to choose a smaller but better-defined target, because that simplifies the use model. (See related questions on “why can’t I use type inference for the return type of private methods?” The answer is we could have done this, but the result would be a more complicated user model for small expressive benefit. We call this “poor return-on-complexity.”)
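
In practice the line drawn is simple: give the array initializer an explicit target type and var works fine (standard Java 10+ behavior):

var k = new int[]{1, 2};     // OK: target type is explicitly int[]
var l = new long[]{1, 2L};   // OK: target type is explicitly long[]
// var m = {1, 2};           // error: array initializer needs an explicit target-type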