Git and GitHub Part-2

Clone Operation

First, copy the repository's URL, then clone it to the desired location.

git clone

That creates a directory named Data-analytics at your current location in the local file system, initializes a .git directory inside it, pulls down all the data for that repository, and checks out a working copy of the latest version. If you go into the newly created Data-analytics directory, you'll see the project files there, ready to be worked on or used. Continue reading
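As a minimal sketch of what cloning does, the commands below clone a local throwaway repository instead of a GitHub URL (the real repository URL is not shown in this excerpt; normally you would paste the HTTPS or SSH URL copied from GitHub). All names and files here are placeholders.

```shell
# Build a small throwaway repository to stand in for the remote
git init --quiet source-repo
git -C source-repo config user.email "you@example.com"   # placeholder identity
git -C source-repo config user.name "Demo User"
echo "id,value" > source-repo/data.csv
git -C source-repo add data.csv
git -C source-repo commit --quiet -m "add data"

# Clone it: creates Data-analytics/ with a .git directory inside,
# all history pulled down, and the latest version checked out
git clone --quiet source-repo Data-analytics
ls Data-analytics   # the checked-out working copy, ready to use
```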


Introduction to Git and GitHub Part-1

What is git?

Git is a distributed source control system. Because it is distributed in nature, it can scale massively. Git's creator, Linus Torvalds, wanted to build a version control system that could handle the requirements of the Linux kernel project. Today the Linux kernel project has around 50 million lines of code (LOC), with 1,200 developers worldwide contributing to it. Another key benefit of git is that most operations are local: only a few commands require a network connection, so you can work completely disconnected, and it is very fast. It is free and open source. Git has a very active community, and there are many resources available online. It is also easy to find developers who already have experience with git. Because of all these factors, git is the most popular VCS. Being the de facto standard, git enjoys wide options for integration into other tools used by the developer community, including text editors, bug-tracking systems, and build servers. Continue reading
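The local-first design mentioned above can be sketched with a few commands; everything below runs without any network connection (repository name, identity, and file are placeholders):

```shell
# Create a repository and make a commit entirely on the local disk
git init --quiet demo-repo
git -C demo-repo config user.email "you@example.com"   # placeholder identity
git -C demo-repo config user.name "Demo User"
echo "hello" > demo-repo/notes.txt
git -C demo-repo add notes.txt
git -C demo-repo commit --quiet -m "first commit"

# Both of these are purely local operations:
git -C demo-repo status          # compares working tree with the local index
git -C demo-repo log --oneline   # history lives in .git on your own disk
```

Only commands such as `git fetch`, `git pull`, and `git push` talk to a remote.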

Step by Step Process for Python Virtual Environments with Anaconda Distribution

Python Virtual Environments with Anaconda Distribution:

A virtual environment is a way to separate different Python environments for different projects. Why would you do this? Suppose you have multiple projects and they all rely on a single package such as Flask or Django, with each project using a different version of it. If you upgrade that package, it could break a couple of those websites. It would be better if each project had an isolated environment containing only the dependencies and packages it needs, at the specific versions it needs. That is exactly what virtual environments do.
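The two-projects scenario above might look like this with conda (a sketch with hypothetical environment names and package versions; it requires the Anaconda or Miniconda distribution to be installed):

```shell
# One isolated environment per project, each pinned to its own Flask version
conda create --name site-a --yes python=3.9 flask=2.0
conda create --name site-b --yes python=3.9 flask=2.2

conda activate site-a   # site-a now sees only its own Flask version
conda deactivate

conda env list          # lists all isolated environments
```

Upgrading Flask inside `site-a` now cannot break `site-b`.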

Continue reading

Power BI for Beginners Part-1

In this series you will learn how to use Power BI to connect to data, analyze and model that data to surface insights, and track how the data changes over time, so you can make better business decisions.

Power BI has a desktop application that allows you to connect to data, model it, analyze and enrich it, and then visualize it. There are also mobile applications, and there is a programmatic approach for handling the data.

We will start with Power BI Desktop, connect to Excel spreadsheet data, pull the data in, and build some visualizations. Then we will see how to publish that to the powerbi.com service, build some dashboards from it, and share and collaborate with the organization. Continue reading

Facebook Data Analytics with Python Part-1

In this blog series I will explain how you can use the Graph API from Python to make simple requests and do some data analysis with the results.

So what is Graph API?

It is an API developed by Facebook that you can use to get data from Facebook, for example data from pages, groups, or even your personal profile. Facebook contains a huge amount of valuable data that you may want as a data analyst, data scientist, or researcher. We will use Python and pandas for this series.
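A minimal sketch of such a request (not an official SDK call): the function below just builds the Graph API URL for a page's posts. The page ID, access token, and API version here are placeholders; a real token comes from a Facebook developer app.

```python
from urllib.parse import urlencode

GRAPH_ROOT = "https://graph.facebook.com/v12.0"  # assumed API version

def page_posts_url(page_id, access_token, fields="message,created_time"):
    """Return the Graph API URL that lists a page's posts."""
    query = urlencode({"fields": fields, "access_token": access_token})
    return f"{GRAPH_ROOT}/{page_id}/posts?{query}"

url = page_posts_url("some-page-id", "ACCESS_TOKEN")
```

The JSON response could then be fetched with `requests` and loaded into a pandas DataFrame for analysis.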

We will work through some simple examples on pages and groups, and get things like which posts or comments are the most interesting and when they were made. I hope you will enjoy it. Continue reading

Linear Regression in Microsoft Azure Cloud End to End Flow

What is Regression?

In simple words, it's a relationship!! Exactly what we have on Facebook, but here it is not between two human beings: it is between two sets of numbers.

Examples: Sales ~ price

What will happen to sales if the price increases or decreases? If the price goes up, sales go down; that is the kind of relationship most of us know from past experience. But if I tell you neurofibromin ~ helix-loop-helix (HLH), what will happen to neurofibromin if I increase HLH? (That is something most of us don't know from past experience.) Regression tells us what the relationship is, whether it is sales ~ price or neurofibromin ~ helix-loop-helix (HLH): if one increases, what happens to the other. That is the relationship we can find out. Continue reading
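The sales ~ price idea can be sketched with ordinary least squares in plain Python (not the Azure flow from the post); the numbers below are made up purely for illustration.

```python
def fit_line(xs, ys):
    """Return (slope, intercept) of the least-squares line y = slope*x + intercept."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    return slope, mean_y - slope * mean_x

price = [1.0, 2.0, 3.0, 4.0, 5.0]
sales = [100.0, 80.0, 60.0, 40.0, 20.0]   # sales fall as price rises

slope, intercept = fit_line(price, sales)
print(slope)   # negative slope: price up -> sales down
```

The sign and size of the slope are exactly the "what happens to one when the other changes" that regression gives us.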

Comparative Analysis On Azure Data Store

Azure Storage: 

Table: A NoSQL key-value store for rapid development using massive semi-structured datasets. It is highly scalable to petabytes, scales dynamically based on load, and offers fast key/value lookups. We can consider it an alternative to a relational DB when we need something highly scalable and schemaless.

Queue: Useful when applications need to absorb unexpected traffic bursts and prevent servers from being overwhelmed by a sudden flood of requests. Instead of being dropped, incoming requests are buffered in the queue until the servers catch up, so traffic bursts don't take down your applications. Continue reading
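The buffering idea can be illustrated conceptually in plain Python (this is not the Azure Storage SDK, just the pattern):

```python
from collections import deque

queue = deque()

# A sudden burst of 100 incoming requests: none are dropped,
# they are simply enqueued faster than the server consumes them.
for i in range(100):
    queue.append(f"request-{i}")

# The server catches up later, draining the queue at its own pace.
handled = []
while queue:
    handled.append(queue.popleft())

print(len(handled))  # every buffered request is eventually served, in order
```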

Getting Started with Azure Data Lake for Analytics

What is Azure Data Lake:

It is a new flavor of Azure Blob Storage that can handle streaming data (low latency, high volume, frequent small updates), is data-locality aware, and allows individual files to be sized at petabyte scale. It is basically HDFS as a service. We can store all types of data here: structured data as in a relational DB, unstructured data such as logs, and semi-structured data such as JSON and XML files. While storing data inside Azure Data Lake there is no need to define a schema; it supports schema on read only.
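Schema on read can be illustrated conceptually in plain Python (this is not the Azure SDK, just the idea): raw records are stored as-is, and a schema is applied only at read time.

```python
import json

# Records are written with no schema enforced at all
raw_store = [
    '{"name": "Alice", "age": 30}',
    '{"name": "Bob"}',               # missing field, still accepted on write
]

def read_with_schema(record, schema):
    """Apply the schema at read time, filling absent fields with None."""
    data = json.loads(record)
    return {field: data.get(field) for field in schema}

rows = [read_with_schema(r, ["name", "age"]) for r in raw_store]
```

The same raw data could later be read with a completely different schema, which is the flexibility a data lake is built for.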

Flow:

Continue reading

Comparative difference between partitioning and bucketing in hive

This is one of the most common questions I have found people getting confused about, so I decided to write a simple explanation of it.

Partitioning in Hive offers a way of segregating a Hive table's data into multiple files/directories. But partitioning gives effective results only when:

  • There are a limited number of partitions
  • The partitions are of comparatively equal size

But this may not be possible in all scenarios. For example, when we partition our tables based on a geographic location such as country, some bigger countries will produce large partitions (4-5 countries by themselves may contribute 70-80% of the total data), whereas small countries will create small partitions (all the remaining countries in the world may contribute just 20-30% of the total data). In these cases partitioning is not ideal. Continue reading
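The contrast can be sketched in HiveQL (a sketch with hypothetical table and column names):

```sql
-- Partitioning: one directory per country value, so skewed countries
-- produce skewed directory sizes.
CREATE TABLE page_views_part (user_id BIGINT, url STRING)
PARTITIONED BY (country STRING);

-- Bucketing: rows are hashed on a high-cardinality column into a fixed
-- number of roughly equal files, which avoids the skew problem.
CREATE TABLE page_views_buck (user_id BIGINT, url STRING, country STRING)
CLUSTERED BY (user_id) INTO 32 BUCKETS;
```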