Java 8: group a list of lists into a map

I have a Model and a Property class with the following signatures:

public class Property {

    private String name;

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }
}

public class Model {

    private List<Property> properties = new ArrayList<>();

    public List<Property> getProperties() {
        return properties;
    }
}
I want a Map<String, Set<Model>> from a List<Model> where the key would be the name from the Property class. How can I use Java 8 streams to group that list by its Properties’ names? All Properties are unique by name.

Is it possible to solve this in a single stream, or should I split it somehow or go for the classical solution?

Yes, it is possible in a single stream: flatten each model into (model, propertyName) entries, then group by the entry value:

          .flatMap(model -> model.getProperties().stream()
                  .map(property -> new AbstractMap.SimpleEntry<>(model, property.getName())))
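
A complete pipeline along these lines might look as follows. This is a sketch, assuming a variable List<Model> models holding your input; note that Set<Model> relies on Model's equals/hashCode (identity by default):

import java.util.AbstractMap;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.stream.Collectors;

// Flatten each model into (model, propertyName) entries, then group the
// models (entry keys) by the property name (entry values).
Map<String, Set<Model>> byName = models.stream()
        .flatMap(model -> model.getProperties().stream()
                .map(property -> new AbstractMap.SimpleEntry<>(model, property.getName())))
        .collect(Collectors.groupingBy(
                Map.Entry::getValue,
                Collectors.mapping(Map.Entry::getKey, Collectors.toSet())));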

Java: Naming convention for plural acronyms

I know there have already been similar discussions on such naming conventions. However, I’m having a problem with plural acronyms.

public List<Disc> findAllDvds(DiscHolder holder) {}
public List<Disc> findAllDvd(DiscHolder holder) {}

Assuming that I have decided to use CamelCase for acronyms, which of the two is generally more acceptable?


I am aware this will invite opinion-based answers, but sometimes when you are in doubt, you just need people to give advice and feedback.

To add on: the confusing part here is that findAllDvds can be read as implying a new acronym, DVDS.


The first (findAllDvds). The second (findAllDvd) is simply incorrect: “all” implies more than one, but “Dvd” is singular in English.

Re your edit:

the confusing part here is that findAllDvds can imply a new acronym DVDS, and it can be considered confusing

Since the “all” implies multiple, the “s” on “Dvds” reads as a plural, not part of the acronym. If it really were DVDS, the name would be findAllDvdss or similar.

It’s said that in computer science, there are three hard problems: Cache invalidation, and naming things. (Off-by-one errors are just common, not hard.)

Jenkins DSL using the Build Blocker plugin

I am writing a Jenkins DSL script (Groovy) that will create a Jenkins job. One of the options I would like the job to have enabled is the box that reads “Block build if certain jobs are running”.

I tried to use the “blockOn” code that I found here:

But when I run my DSL script, the job gets created but does NOT have the “Block build if certain jobs are running” box checked.

Below is the entire DSL script that gets executed:

job('Testing-DSL') {
  blockOn(['2.Dummy_job', '1.Dummy_job']) {
  } //closing blockOn section

  //This is just a template job that I use to reference how Groovy code should look
  logRotator(-1, 30, -1, -1)

  parameters {
    choiceParam('CHOICE1', ['choice_option1', 'option2'], 'Some description for this param')
    stringParam('STRING1', 'Default_Value_string1', 'Some description for this option')
  } //closing parameters section

  steps {
    shell('echo $CHOICE1')
    shell('echo $STRING1')
  } //closing steps section
} //closing job section


Your script works for me; the “Block build if certain jobs are running” box is checked.

You may need to restart Jenkins before using Job DSL if you recently installed plugins.
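
If you also need to control the block level or queue scanning, the blockOn closure accepts those options. A sketch, assuming a recent version of the Build Blocker plugin (the option values are the ones its DSL context documents):

job('Testing-DSL') {
  blockOn(['2.Dummy_job', '1.Dummy_job']) {
    //'GLOBAL' blocks on matching builds on any node, 'NODE' only on the same node
    blockLevel('GLOBAL')
    //whether queued-but-not-running builds also block: 'DISABLED', 'BUILDABLE' or 'ALL'
    scanQueueFor('DISABLED')
  }
}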

Exceeding Amazon AWS Free Tier

I recently set up an Amazon Free Tier account to store some databases. However, I was stupid enough not to pay attention to the limit of 750 hours per month and created too many instances, so this month I will definitely run over the 750 hours. My question is the following.

Once I have exceeded the free tier limits in one month, do I fall out of the free tier entirely? Or do I still get the free 750 hours once the next month starts (until the end of one year from the creation of the account)?


As per the Hourly Usage section of the Free Tier documentation, the free tier has a limit of 750 hours per month.

The limit applies month by month: you will be billed for whatever exceeds 750 hours this month, and you will get the full 750 free hours back when the next monthly billing cycle starts (until the free tier expires one year after account creation).

Importing a MySQL table into RDS from a CSV file

I have been searching all over the web and have been unable to find a solution to the following issue.
I need to automate a simple CSV file import into an RDS MySQL database. Normally I would push or pull the file onto the MySQL server and then run LOAD DATA INFILE, but I do not have access to the command line of the RDS box. The second option is to run mysqlimport, but again I do not know how to initiate this from the RDS box. I have Googled for hours and cannot find an adequate answer. Any help is appreciated. Thanks in advance.


  1. Download the MySQL utilities so that mysqlimport is available on your local machine.
  2. Run mysqlimport against your RDS endpoint with the --host option:
mysqlimport --local \
        --compress \
        --user=username \
        --password \
        --host=hostname \
        --fields-terminated-by=',' Acme sales.part_*
--host - your RDS endpoint
--user - your RDS username
--password - prompts for your RDS password

Ref: Importing Data From Any Source to a MySQL or MariaDB DB Instance (AWS RDS documentation)
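
Alternatively, LOAD DATA LOCAL INFILE does work against RDS, because the LOCAL variant streams the file from the client machine rather than reading it on the database host. A minimal sketch, assuming the mysql client is installed where the CSV lives; the endpoint, schema, and file names are placeholders:

#!/bin/bash
# Hypothetical endpoint, schema, and file names - substitute your own.
mysql --host=mydb.xxxxxxxx.us-east-1.rds.amazonaws.com \
      --user=username \
      --password \
      --local-infile=1 \
      --execute="LOAD DATA LOCAL INFILE 'sales.csv' INTO TABLE Acme.sales FIELDS TERMINATED BY ','"

Since this is a single client-side command, it is straightforward to schedule with cron for the automation you are after.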

./path to file not executing

I am trying to launch MATLAB from my desktop. The path of the file is /usr/local/MATLAB/R2017b/bin/matlab.

I tried executing ./usr/local/MATLAB/R2017b/bin/matlab,

also tried .//usr/local/MATLAB/R2017b/bin/matlab

and ./ /usr/local/MATLAB/R2017b/bin/matlab.

How does it work?


Just run /usr/local/MATLAB/R2017b/bin/matlab to invoke the binary via its full path. If you put a . in front, the shell will instead try to run it via the relative path <CURRENT DIR>/usr/local/MATLAB/R2017b/bin/matlab, which does not exist.

You can also add /usr/local/MATLAB/R2017b/bin to your PATH variable in order to be able to execute the command matlab without having to specify its whole path each time.

Also edit your ~/.bashrc file and add PATH=$PATH:/usr/local/MATLAB/R2017b/bin so that the change persists across reboots and you can just run matlab.
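
Concretely, that amounts to something like this (paths as in the question; the ~/.bashrc step assumes bash is your login shell):

# One-off, for the current shell session only
export PATH="$PATH:/usr/local/MATLAB/R2017b/bin"

# Persist across sessions: append the export to ~/.bashrc and reload it
echo 'export PATH="$PATH:/usr/local/MATLAB/R2017b/bin"' >> ~/.bashrc
source ~/.bashrc

# The binary is now found by name
matlab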

How can I do full outer join on multiple csv files (Linux or Scala)?

I have 620 CSV files and they have different columns and data. For example:

//file1.csv
word, count1
w1, 100
w2, 200

//file2.csv
word, count2
w1, 12
w5, 22

//Similarly fileN.csv
word, countN
w7, 17
w2, 28

My expected output

word, count1, count2, countN
w1,    100,     12,    null
w2,    200,    null,    28
w5,    null,    22,    null
w7,    null,   null,    17

I was able to do it in Scala for two files like this, where df1 is file1.csv and df2 is file2.csv:

df1.join(df2, Seq("word"),"fullouter").show()

I need a solution, either in Scala or as a Linux command, to do this.


Using Spark you can read all your files as DataFrames and store them in a List[DataFrame]. After that you can apply reduce on that list to join all the DataFrames together. The following code uses three DataFrames, but you can extend it to all your files.

//imports: DataFrame type (toDF comes from spark.implicits._ in the spark-shell)
import org.apache.spark.sql.DataFrame

//create all three dummy DFs
val df1 = sc.parallelize(Seq(("w1", 100), ("w2", 200))).toDF("word", "count1")
val df2 = sc.parallelize(Seq(("w1", 12), ("w5", 22))).toDF("word", "count2")
val df3 = sc.parallelize(Seq(("w7", 17), ("w2", 28))).toDF("word", "count3")

//store all DFs in a list
val dfList: List[DataFrame] = List(df1, df2, df3)

//apply reduce function to join them together
val joinedDF = dfList.reduce((a, b) => a.join(b, Seq("word"), "fullouter"))
joinedDF.show()
//+----+------+------+------+
//|word|count1|count2|count3|
//+----+------+------+------+
//|  w1|   100|    12|  null|
//|  w2|   200|  null|    28|
//|  w5|  null|    22|  null|
//|  w7|  null|  null|    17|
//+----+------+------+------+

//To write the joined result to a CSV file (BASE_PATH as used in the reading loop below)
joinedDF.coalesce(1).write
  .option("header", "true")
  .csv(BASE_PATH + "output")

This is how you can read all 620 files and store them in a list:

//declare a ListBuffer to store all DFs
import scala.collection.mutable.ListBuffer
val dfList = ListBuffer[DataFrame]()

(1 to 620).foreach { x =>
  val df: DataFrame = spark.read
    .format("csv")
    .option("header", "true")
    .load(BASE_PATH + s"file$x.csv")

  dfList += df
}
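
Putting the two pieces together, reduce works directly on the ListBuffer, so you can join everything that was read and write the result out (reusing BASE_PATH from the loop above; the output directory name is an assumption):

//join all 620 DataFrames on "word" and write the result as CSV
val joinedAll = dfList.reduce((a, b) => a.join(b, Seq("word"), "fullouter"))
joinedAll.coalesce(1).write
  .option("header", "true")
  .csv(BASE_PATH + "joined_output")

With 620 inputs the join lineage gets deep, so in practice you may need to checkpoint or batch the joins to keep the query plan manageable.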