My Empty Mind

Next iPhone Release (September 2018) VS iPhone X: Features, Specification and Rumors

May 21, 2018

With September drawing closer, discussions and rumors have put the iPhone in the spotlight again. This autumn Apple Inc. may come up with three to four new iPhone releases, as per the latest update from January.

There has been widespread speculation about the release of the new iPhone: its name, price, specifications and design. There have also been rumors that Apple will drop the "X" version, as it was a special launch for the 10th anniversary of the smartphone, and will continue with names such as iPhone 9, 9+ and so on. Fans across the world are excited for the new release, which is expected to be unveiled in September 2018.
Apple may launch the successor of the iPhone X as the iPhone X+, iPhone XI or iPhone 9, but it will certainly be an advanced version in terms of mobile technology. The iPhone X started at £999/$999, a bit too costly to fit into a middle-class budget, while the iPhone 8 and 8 Plus were priced at £699/$699 and £799/$799. There were reports of declining iPhone X sales in Asia and Europe, blamed on its hefty price, so the new releases are expected to be in the same range, if not lower.

The iPhone X is equipped with many forward-looking technologies such as wireless charging, Face ID and Animoji. The device was praised for its display, Animoji, build quality and camera, but many fans remain ambivalent because of the notch and Face ID (which requires looking directly at the screen). The Face ID biometric unlocking system behaves unreliably when tested on identical twins, and the Vietnamese firm Bkav announced in a blog post that it had created a mask that tricked the Face ID unlocking system; it has also been challenged by hackers and mask makers. The new release is expected to fix these issues.

As far as the camera is concerned, the iPhone X has received positive feedback for working well in low light. It scored 97 in the DxOMark mobile overall rankings, putting it behind the Google Pixel 2 (98) and Samsung's Galaxy S9+ (99) and in joint second place with the Huawei Mate 10. So the new release may not need a significant improvement in camera quality.


Some issues which were reported in the iPhone X:

1. Difficult one-handed use: when the iPhone 5 (4 inch) was launched, Apple ran an advertisement showing a perfectly sized phone that could be operated easily with one hand, but after the iPhone X (5.8 inch) it is clear they would rather everyone forgot that commercial.


2. The screen becomes unresponsive and the camera flash fails in cold weather; however, the former has been fixed (as noted on Wikipedia).

3. Multiple users have reported that the display often becomes unresponsive, usually during a call, forcing them to restart the device.

Whatever the successor of the iPhone X ends up being called, it is certainly expected to offer better features than the existing model and, hopefully, to address the issues reported in the iPhone X. As per the latest update, the new iPhone is expected to have a better battery, more RAM, faster processors and an improved display.

Camera:

The iPhone X had a resolution of 2436 x 1125 (the highest in the iPhone series) and a pixel density of 458 pixels per inch. It is unlikely that Apple will go higher than 458 ppi for the September 2018 release. However, there are rumors that a rear camera with a triple-lens setup will be introduced.

Price:

The new releases are expected to be in a similar price range, if not lower. There has been widespread speculation about the launch of a cheaper, budget model to attract customers. There was discussion of Apple sourcing the screen from LG rather than Samsung to get cheaper hardware components and cut the final price of the device, and there are also rumors about removing the wireless charging feature to bring the price down further.

Face ID and Touch ID:

Image source: https://pxhere.com/en/photo/1095120
An in-display Touch ID would mean placing your finger on the display itself to scan it, instead of on the Home button. According to The Investor, Apple is planning to launch three new iPhones with Face ID this year.

The company is researching how to put an in-screen Touch ID fingerprint scanner into the display (it filed a patent on a Touch ID display in January 2013).


Batteries:

The iPhone X is often ranked below the iPhone 8 and Samsung's Galaxy S8 and Note 8 due to its shorter battery life. The next iPhone release is expected to fix the battery issue.

As reported on MacRumors:

According to Kuo, iPhone X Plus is expected to have up to a 25 percent larger battery capacity of 3,300-3,400 mAh vs. iPhone X.

Kuo adds that Apple has settled on a two-cell, L-shaped design for the second-generation iPhone X and iPhone X Plus battery, compared to a single-cell, L-shaped design that could have yielded up to 10 percent additional capacity.


RAM:

The iPhone X has 3 GB of RAM. According to Kuo, the iPhone X Plus will have an increased 4 GB of RAM.

5G:

5G networks are expected to roll out in the UK after 2020. The iPhone X Plus is not expected to have 5G support, though we may see some basic compatibility with 5G services.

Design:

The successor of the iPhone X is expected to have a bezel-free design, a Face ID camera and no Home button. According to Digital Trends, the facial recognition sensors could be embedded in the display itself.


Size:

As per Nikkei, Apple is planning to launch both an LCD handset and an OLED (organic light-emitting diode) handset, with the LCD version being the cheaper of the two. The OLED handsets may be released in two sizes: one about 6.3 inches and the other 5.8 inches. The LCD phone will probably have a metal back and may lack wireless charging in order to cut costs.

According to reports on MacRumors, there may be two LCD-based iPhones released in 2018. One of these LCD models could be 5.7 in to 5.8 in, and the other 6.0 in to 6.1 in.

Color:

Along with black and silver, Apple may release a gold handset option. Photos of a gold iPhone X posted by Ben Geskin appear to have leaked out; though not reliable, they suggest it may be released in September 2018 as the iPhone X Plus.

Image via Benjamin Geskin

What features are you expecting in the next-generation iPhone? Let us know your views on Facebook or Twitter @MyemptymindC.



L2: Execution modes and Resource Managers of Spark

January 20, 2018
Spark has four modes of execution, depending on the resource manager and coordinator used to run a Spark job. They are:
  1. Local Mode
  2. Standalone Mode
  3. Yarn Mode
  4. Mesos

Running Spark on YARN is very common in industry.
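
As a hedged illustration, the execution mode is normally selected through the master setting when building the SparkSession (or when calling spark-submit); the host names below are placeholders, not real clusters.

# Minimal sketch: choosing an execution mode via the master setting.
from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .appName("execution-mode-demo")
         .master("local[*]")                    # Local mode: one JVM, all local cores
         # .master("spark://master-host:7077")  # Standalone mode (placeholder host)
         # .master("yarn")                      # YARN mode (needs Hadoop/YARN config)
         # .master("mesos://master-host:5050")  # Mesos mode (placeholder host)
         .getOrCreate())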


D1: PySpark - Capture bad records while loading a csv file in Spark Data Frame

January 15, 2018
Loading a CSV file and capturing all the bad records is a very common task in ETL projects. The bad records are analyzed to take corrective or preventive measures for loading the file. In some cases the client may ask you to send the bad record file for their information or action, so it becomes very important to capture the bad records in these scenarios.
Most relational database loaders, like SQL*Loader or nzload, provide this feature, but when it comes to Hadoop and Spark (2.2.0) there is no direct solution for this.
However, a solution to this problem is present in Databricks Runtime 3.0, where you just need to provide a bad records path and the bad record file will be saved there.

df = (spark.read
      .option("badRecordsPath", "/data/badRecPath")
      .parquet("/input/parquetFile"))

However, in earlier Spark releases this method won't work. We can achieve the same result in two ways:
  1. Read the file as an RDD and then use RDD transformation methods to filter out the bad records (a brief sketch follows below).
  2. Use spark.read.csv().
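
For completeness, here is a hedged sketch of the first, RDD-based approach; the file path and the three-column, pipe-delimited validation rule are assumptions for illustration only.

# Approach 1 (sketch): read the raw file as an RDD and filter malformed lines.
raw_rdd = spark.sparkContext.textFile("/test/data/test.csv")

def is_bad(line):
    # Treat a line as bad if it does not split into exactly 3 pipe-delimited
    # fields or if its first field is not an integer (illustrative rule only).
    parts = line.split("|")
    return len(parts) != 3 or not parts[0].isdigit()

bad_records_rdd = raw_rdd.filter(is_bad)
print(bad_records_rdd.collect())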


In this article we will see how to capture bad records through spark.read.csv(). In order to load a file and capture the bad records we need to perform the following steps:

  1. Create a schema (StructType) for the feed file with an extra column of string type (say bad_record) for corrupt records.
  2. Call spark.read.csv() with all the required parameters, passing the bad record column name (the extra column created in step 1) as the parameter columnNameOfCorruptRecord.
  3. Filter the records where "bad_record" is not null and save them as a temp file.
  4. Read the temporary file as CSV (spark.read.csv), passing the same schema as above (step 1).
  5. From the bad-records DataFrame, select "bad_record".


Step 5 will give you a DataFrame containing all the bad records.

Code:

#####################Create Schema#####################################
from pyspark.sql.types import StructType, StructField, IntegerType, StringType

customSchema = StructType([
    StructField("order_number", IntegerType(), True),
    StructField("total", StringType(), True),
    StructField("bad_record", StringType(), True)
])

"bad_record" is the bad record column.

#################Call spark.read.csv()####################
orders_df = spark.read \
    .format('com.databricks.spark.csv') \
    .option("badRecordsPath", "/test/data/bad/") \
    .option("mode", "PERMISSIVE") \
    .option("columnNameOfCorruptRecord", "bad_record") \
    .options(header='false', delimiter='|') \
    .load('/test/data/test.csv', schema=customSchema)

After calling spark.read.csv, if any record does not satisfy the schema then null is assigned to all the columns and the original concatenated record is assigned to the bad record column.
orders_df.show()

+------------+-----+----------+
|order_number|total|bad_record|
+------------+-----+----------+
|           1| 1000|      null|
|           2| 4000|      null|
|        null| null| A|30|3000|
+------------+-----+----------+

Here, all the records where bad_record is not null are the ones that violated the schema.
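
A minimal sketch of steps 3, 4 and 5, assuming the orders_df and customSchema defined above; the temporary path and the pipe separator are illustrative choices, not fixed requirements.

# Step 3: keep only the rows where the corrupt record column is populated
# and write them to a temporary pipe-delimited file.
bad_rows_df = orders_df.filter(orders_df.bad_record.isNotNull())
bad_rows_df.write.mode("overwrite").csv("/test/data/bad_tmp", sep="|")

# Step 4: re-read the temporary file with the same custom schema.
bad_reloaded_df = spark.read.csv("/test/data/bad_tmp", schema=customSchema, sep="|")

# Step 5: the corrupt record column now holds every rejected input line.
bad_reloaded_df.select("bad_record").show(truncate=False)

# Alternative: calling orders_df.cache() before the first action also keeps the
# corrupt record column consistent across later actions (see the note below).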

NOTE:
The corrupt record column is generated at run time, when the DataFrame is instantiated and data is actually fetched (by calling an action).
The output of the corrupt column depends on which other columns are part of that particular action call.
If the error-causing column is not part of the action call, then bad_record won't show any bad record.
If you want to overcome this issue and want the bad records to persist, then follow steps 3, 4 and 5, or use caching.




Bitcoin: A simple explanation in layman's terms

January 07, 2018
Bitcoin has become a buzzword nowadays. There has been a remarkable surge in the price of bitcoin, from a few dollars in 2010 to over $19,000 each in the last couple of years. So what is so special about bitcoin?
Why Bitcoin, and what is so special about it?
  1. It is a currency which never reveals the identity of its owners. This makes bitcoin popular, and it is also the reason the currency has been banned in some countries.
  2. Bitcoin is not regulated by any government or bank, making it impossible for a government or any third party to control or manipulate it.
  3. New bitcoins can be mined by anybody.


What is Bitcoin? 

Image: BTC number of transactions per month (source: https://upload.wikimedia.org/wikipedia/commons/c/c8/BTC_number_of_transactions_per_month.png)
Bitcoin is a decentralized cryptocurrency that uses rules of cryptography for regulation and generation. It has a supply cap of 21 million coins, which causes production to decrease over time and makes it more valuable. As of now, more than half of all bitcoins have been generated.

In certain aspects bitcoin is very similar to the e-wallets we have on our mobiles. People keep money in online wallets like Paytm, Ola Money, PayZapp or other mobile wallets for online shopping or buying a service; the same can be done with bitcoins as well. You can also send bitcoins to anyone you want, just as if you were sending them money. Though it has a lot in common with other online wallets, it is very different from them in many aspects, such as:

  1. Common mobile wallets store money in a currency like rupees or dollars, but bitcoin is itself a unit.
  2. Those currencies are government recognized, whereas Bitcoin is not regulated by any government or bank; it bypasses government and bank regulations.
  3. Bitcoin transactions are anonymous and secret; the identity of the people involved in a transaction is not revealed.
  4. A bitcoin wallet can be stored online or offline (e.g. on a USB drive).

Some technical jargon used in the world of bitcoin:

  • bitcoin: A cryptocurrency (the unit itself).
  • Bitcoin: The network, software and system that regulates, manages and controls bitcoin.
  • Wallet: A small personal database that you store on your computer drive, on your smartphone, on your tablet, or somewhere in the cloud.
  • Block: A bunch of transactions on the network.
  • Transaction: A transfer of money from one wallet to another.
  • Blockchain: A ledger, the final summarized record. It is open to the public and holds the details of every transaction. A network of computers running the Bitcoin software maintains the blockchain; it logs every transaction and records the ownership of every bitcoin in the network.
  • Miners: The people (and machines) who control the network by verifying transactions; those who mine new coins are called miners. Anyone with a computer can mine. Bitcoin mining involves solving complex mathematical problems, and miners ensure that each transaction is secure and processed safely.

How to obtain bitcoins?

You can obtain bitcoins by the following three methods:
  1. Purchasing them through a bitcoin exchange.
  2. Accepting them as payment for services or goods you offer.
  3. Mining new coins.

"Mining" is the term used for the discovery of new bitcoins. The mining process is simply the verification of bitcoin transactions happening across the Bitcoin network.

Suppose you buy a book, a product or a service from an online store that accepts bitcoin and you pay in bitcoin. To check the authenticity of the bitcoin, miners begin to verify the transaction. All the transactions are grouped into boxes with a virtual lock on them; these boxes form the "blockchain."

Miners run software to find the key that will open that virtual lock, and when the key is found the transactions are verified. The current number of attempts to find the correct key is 1,789,546,951.05, according to Blockchain.info, a top site for real-time bitcoin transactions. The miner gets a reward of newly generated bitcoins (currently 12.5 bitcoins) for finding the key. Every 210,000 blocks, or roughly every four years, the block reward is halved: it started at 50 bitcoins per block in 2009 and was halved to 25 bitcoins per block in 2012.
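
As a small illustration of the halving schedule described above, here is a hedged sketch; the function and its simple integer-division model are only an approximation of the real protocol.

# Illustrative sketch of the block reward halving (every 210,000 blocks).
def block_reward(block_height, initial_reward=50.0, halving_interval=210000):
    # Number of halvings that have happened by this block height.
    halvings = block_height // halving_interval
    return initial_reward / (2 ** halvings)

print(block_reward(0))       # 50.0 - the original reward
print(block_reward(210000))  # 25.0 - after the first halving
print(block_reward(420000))  # 12.5 - after the second halving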

As I said, bitcoins can be mined by anybody. To do so you just need powerful computation engines with top-quality hardware; with that you can pitch into the Bitcoin network to verify transactions by doing the complex mathematical computation to find the right key for the block. When any one miner succeeds in solving their math problem, they get to create a new block and receive a certain number of bitcoins as a reward, known as "the block reward." If you don't want to invest much in purchasing a powerful new machine, you can join the network by adding your computer to a mining pool. Pools are collective groups of bitcoin miners who pool their computers to mine bitcoin. Sites such as Bitcoin.com, BTC.com and Slush's Pool allow small miners to receive a portion of the bitcoins if they add their computers to the group.

However, in the early years mining bitcoin with personal computers was possible; now the network is very competitive, so using specialized hardware is the only way to earn.
Many online wallets are available on the internet where you don't have to maintain the bitcoin software on your device, though you can also download the software and manage it locally on your computer or device.

Where to buy bitcoins:
There are many online bitcoin exchanges where you can open a bitcoin wallet account and start doing transactions. Zebpay is one of the Android mobile bitcoin wallets. You can use my referral code REF30118675 or the link http://link.zebpay.com/ref/REF30118675 to get a free bitcoin wallet and earn free bitcoin worth Rs. 100.



L1: Introduction to Apache Spark

January 07, 2018
Apache Spark is a scheduling, monitoring and distribution engine that does lightning-fast, fault-tolerant, in-memory parallel processing of data. It came out of the AMPLab project at UC Berkeley and was developed as a unified engine to meet all the needs of big data processing.

Spark Core uses both memory and disk while processing data. It has four traditional language APIs, Scala, Java, Python and R (experimental), and now also a newer DataFrame API (introduced in 1.3). Around Spark Core there are high-level libraries like Spark SQL, GraphX, Spark Streaming, MLlib, etc.
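
To make the DataFrame API and Spark SQL mentioned above concrete, here is a minimal, hedged PySpark sketch; the application name and the tiny dataset are made up for illustration.

# Minimal sketch of the DataFrame API and Spark SQL (illustrative data).
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("spark-intro-demo").getOrCreate()

# Build a tiny DataFrame and query it through Spark SQL.
people_df = spark.createDataFrame([(1, "Alice"), (2, "Bob")], ["id", "name"])
people_df.createOrReplaceTempView("people")
spark.sql("SELECT name FROM people WHERE id = 1").show()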

Spark has four modes of execution, depending on the resource manager and coordinator used to run a Spark job. They are:
  1. Local Mode
  2. Standalone Mode
  3. Yarn Mode
  4. Mesos
Running Spark on YARN is very common in industry.
To know more about resource managers and execution modes in Spark, see L2: Execution modes and Resource Managers of Spark.



L3: Python or Scala? Which one to choose for Apache Spark?

December 25, 2017

When I started learning Spark I was not sure which language to choose: Python or Scala? I was a PL/SQL developer; Python and Scala were both new languages for me, and I was not even aware of the market trends and requirements around Apache Spark. I started asking people around me and spent a considerable amount of time on Google trying to work out which language to choose for Apache Spark.
Finally I came to a conclusion and wanted to share it with everyone who is a beginner in Spark or confused about which language to choose. My analysis is based on my own experience and on talking to people in industry from India and the US. Without further ado, here are my few cents to help you decide which language to choose for Apache Spark.

Popular languages used in industry for data analysis:
The dominance of Python in areas like data science, machine learning and deep learning is unparalleled. Python is very popular among data scientists, and because of its tons of libraries it is really hard to beat Python for data analysis.

The hottest technology, TensorFlow, is primarily used through Python. Python is used in a broad range of scenarios, e.g. scientific and numeric computing, machine learning, software and business application development, data mining, cross-platform development and RAD (rapid application development).

So learning Python will broaden the scope of your career.

Performance of PySpark:
In Spark 1.0.x we only had RDDs to work with, but from Spark 2.0.x we have the power of DataFrames. Using DataFrames, the runtime performance of a Spark job is the same whether you use Python or Scala: Scala and Python DataFrame operations are compiled down to JVM bytecode, so there is a negligible performance difference. Python DataFrame operations are also about five times faster than Python RDD operations.


However, in an actual project you may sometimes need to work with RDDs, but that can be handled easily.
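
For illustration, here is a hedged sketch of the same aggregation written once with the RDD API and once with the DataFrame API (an existing spark session and made-up data are assumed); the DataFrame version is the one that benefits from the JVM-level optimization mentioned above.

# Same aggregation via the RDD API and the DataFrame API.
pairs = [("a", 1), ("b", 2), ("a", 3)]

# RDD API: the lambda runs in Python workers, adding serialization overhead.
rdd = spark.sparkContext.parallelize(pairs)
rdd_result = rdd.reduceByKey(lambda x, y: x + y).collect()

# DataFrame API: the plan is optimized and executed inside the JVM.
df = spark.createDataFrame(pairs, ["key", "value"])
df_result = df.groupBy("key").sum("value").collect()

print(rdd_result)  # e.g. [('a', 4), ('b', 2)]
print(df_result)   # e.g. [Row(key='a', sum(value)=4), Row(key='b', sum(value)=2)]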

Reluctance of ETL developers toward Scala:
With the increase in popularity of Hadoop, and the trust it is building by delivering a powerful, reliable and cheaper data processing solution, most of the big industry players are now thinking of implementing new ETL projects on Hadoop or re-platforming existing ones. Data analysis projects have lots of ETL jobs to process and load data into data marts or a warehouse. Most of the ETL projects I came across had lots of shell, Perl or other scripts for these jobs. In the last few years there has been a swing toward Python, which is not only the dominant alternative to Perl and shell scripts but also a powerful language in its own right.

People in industry consider Python a better solution than Perl and shell scripts because of its ubiquity, power, rich community and gentle learning curve.

Ease of learning and the productivity graph:
Python is both functional and object-oriented, which makes it easy to use yet robust. For a person with a PL/SQL background, Python will certainly be the natural choice. Python is easy to learn and has a gentle learning curve compared to other programming languages, which often have very steep learning curves.
Development is also easier because of the wide Python community.

It's really easy!
The only things you need to start with Python are to just start coding and a browser tab for Google searches.

Mastering Spark? Is this what you want?
Spark is written in Scala, so knowing Scala lets you understand and modify Spark's internal code. Since big data is still evolving, you will encounter many use cases where no direct solution is available; you will then either have to settle for a tedious workaround or understand the Spark internals and modify them, if required, to fit your use case.
A good example of this scenario is reading a CSV in Spark through read.csv() and capturing all the bad records along with the error message, record number and bad column value. In Spark 2.0 there is no straightforward way to do this (a solution to this scenario is explained in another post).

If you come across a bug in the Spark code, you can fix it only if you know Scala, e.g. the DataFrameWriter.saveAsTable issue with the Hive format when creating a partitioned table.

So if you want to master Spark, you will have to know Scala.

Conclusion:
1. If you are a beginner and you don't have a specific requirement to learn a particular language, then go for Python. Python is easy and has a gentle learning curve, and so will your Spark learning be. You will become a good Spark developer in very little time with Python. Once that is done, you will be in a good position to decide whether to move to Scala or stay happy with your career in PySpark.

2. If you know Python, companies working on data science (with Spark) or biotech software will certainly prefer you.

3. I see a growing trend of migrating ETL projects from other languages (Perl, shell) to Python, so it is a good time to choose Python.

4. It's really easy, no extra effort required. The only things you need are to just start coding in Python and a browser tab for Google searches.

Java: Singleton Design Pattern

November 24, 2017

The Singleton design pattern belongs to the creational design pattern family and is used to control object creation. In this pattern only one object is created per Java Virtual Machine (JVM), and this object can be used by all classes.
There are many ways to implement this pattern.

1) Eager Initialization: In this method, the object of the class is created when the class is loaded into memory by the JVM. This is done by assigning the instance to a reference variable directly.

 // Java code to create singleton class by   
 // Eager Initialization  
 public class SingletonTest   
 {  
  // public instance initialized when loading the class  
  public static SingletonTest obj = new SingletonTest();  
  private SingletonTest()  
  {  
   // code for private constructor  
  }  
 }  

2) Using a Static Block: This is the same as eager initialization; the only difference is that the object is created in a static block, which makes it possible to handle any exception that occurs during creation.

 // Java code to create singleton class  
 // using a static block  
 public class SingletonTest   
 {  
  // public instance  
  public static SingletonTest obj;  

  private SingletonTest()   
  {  
   // private constructor  
  }  

  // static block to initialize the instance and handle any exception  
  static  
  {  
   try  
   {  
    obj = new SingletonTest();  
   }  
   catch (Exception e)  
   {  
    throw new RuntimeException("Exception while creating singleton instance", e);  
   }  
  }  
 }  







Java Memory Management

November 11, 2017

Java provides an excellent feature called garbage collection, which allows developers to create objects without worrying about freeing them.

Java itself takes care of memory allocation and de-allocation (in C/C++ the developer has to manage object memory allocation and de-allocation).
The important job of garbage collection is to free the space held by unwanted objects.



As you can see in the image below, JVM heap memory is divided into different parts. At a high level it is divided into two major parts:
  1. Young generation
  2. Old generation

Young generation - The young generation is the area where all new objects are created. When this area fills up, garbage collection is performed; this is called a Minor GC. The young generation is further divided into:
  • Eden memory
  • Survivor memory spaces (S0 and S1, as shown in the image above)
How does garbage collection work in Eden memory?
Most newly created objects are allocated in the Eden memory space. When Eden fills up with new objects, a Minor GC is performed and the surviving objects are moved to one of the survivor spaces; at the same time, the Minor GC also checks the current survivor space and moves its live objects to the other survivor space.
Objects that are still alive after many GC cycles are moved to the old generation memory space.

How does garbage collection work in old generation memory?
Old generation memory contains objects that are long-lived and have survived many cycles of Minor GC. When the old generation memory space fills up with objects, garbage collection is performed; this is called a Major GC. The drawback of a Major GC is that all application threads are stopped until the operation completes.

Permanent generation
The permanent generation, or 'Perm Gen', contains the application metadata required by the JVM to describe the classes and methods used in the application.
Perm Gen is populated by the JVM at run time.
