
How to Crack Zoho Interview [Updated 2019]

Zoho’s Interview Process:

There are five rounds in the Zoho interview process:

1. First Round: (Aptitude written round)
2. Second Round: (Normal Programming round)
3. Third Round: (Advanced Programming round)
4. Fourth Round: (Technical round)
5. Fifth Round: (HR round)

Here are sample questions and the topics from which questions are likely to be asked:

1. First Round: (Aptitude, written round)

This round consists of two sections, “Reasoning Aptitude” and “Technical Aptitude”. The reasoning section is more like puzzles, so do concentrate on logical puzzles. The technical aptitude section deals mostly with operator precedence, pointers, iterations, and dynamic memory allocation.

2. Second Round: (Normal Programming round)

1.Print the word with odd letters as


  2. Given a set of numbers like <10, 36, 54, 89, 12>, find the sum of weights based on the following conditions:
    1. 5 if it is a perfect square
    2. 4 if it is a multiple of 4 and divisible by 6
    3. 3 if it is an even number

Then sort the numbers based on their weights and print them as follows:

<10,its_weight>,<36,its_weight>,<89,its_weight>

The numbers should be displayed in increasing order of weight.
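A possible sketch of this problem (assuming a number’s weight is the sum of all conditions it satisfies, and that ties keep the input order):

```python
import math

def weight(n):
    """Sum the weights of all conditions n satisfies (assumed interpretation)."""
    w = 0
    if math.isqrt(n) ** 2 == n:      # perfect square -> +5
        w += 5
    if n % 4 == 0 and n % 6 == 0:    # multiple of 4 and divisible by 6 -> +4
        w += 4
    if n % 2 == 0:                   # even number -> +3
        w += 3
    return w

def weighted_sort(nums):
    # Sort by weight (increasing; sorted() is stable, so ties keep input order)
    # and format as <number,weight> pairs.
    pairs = sorted(((n, weight(n)) for n in nums), key=lambda p: p[1])
    return ",".join("<{},{}>".format(n, w) for n, w in pairs)

print(weighted_sort([10, 36, 54, 89, 12]))
# → <89,0>,<10,3>,<54,3>,<12,7>,<36,12>
```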

  3. Save the string “WELCOMETOZOHOCORPORATION” in a two-dimensional array and search for a substring like “too” in the two-dimensional string, both from left to right and from top to bottom.

Print the start and end indices as:

Start index: <1,2>

End index: <3,2>
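One possible sketch (the 4×6 row-major layout, the 0-based <row,col> indices, and the pattern searched below are my assumptions, since the original did not fix them):

```python
def to_grid(s, cols):
    """Store the string row-major in a 2D array with the given column count."""
    return [list(s[i:i + cols]) for i in range(0, len(s), cols)]

def search_2d(grid, pat):
    """Return ((start_row, start_col), (end_row, end_col)) or None."""
    rows, cols = len(grid), len(grid[0])
    # Left-to-right within each row.
    for r in range(rows):
        i = "".join(grid[r]).find(pat)
        if i != -1:
            return (r, i), (r, i + len(pat) - 1)
    # Top-to-bottom within each column.
    for c in range(cols):
        i = "".join(grid[r][c] for r in range(rows)).find(pat)
        if i != -1:
            return (i, c), (i + len(pat) - 1, c)
    return None

grid = to_grid("WELCOMETOZOHOCORPORATION", 6)   # 4 rows x 6 columns
print(search_2d(grid, "COR"))                   # → ((2, 1), (2, 3))
```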

  4. Given a 9×9 Sudoku, we have to evaluate it for correctness. We have to check both the submatrix (3×3 box) correctness and the correctness of the Sudoku as a whole.
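A minimal sketch of the validation (assuming a completed 9×9 grid of digits 1–9, where “correct” means every row, column, and 3×3 submatrix contains each digit exactly once):

```python
def is_valid_group(group):
    """A row, column, or box is valid iff it is a permutation of 1..9."""
    return sorted(group) == list(range(1, 10))

def is_valid_sudoku(grid):
    rows = [list(row) for row in grid]
    cols = [[grid[r][c] for r in range(9)] for c in range(9)]
    boxes = [[grid[r][c]
              for r in range(br, br + 3)
              for c in range(bc, bc + 3)]
             for br in range(0, 9, 3) for bc in range(0, 9, 3)]
    return all(is_valid_group(g) for g in rows + cols + boxes)

# A known-valid grid built from the standard shifting construction.
grid = [[(r * 3 + r // 3 + c) % 9 + 1 for c in range(9)] for r in range(9)]
print(is_valid_sudoku(grid))  # → True
```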

  5. Given a two-dimensional array of strings like

<”luke”, “shaw”> <”wayne”, “rooney”> <”rooney”, “ronaldo”> <”shaw”, “rooney”>

where the first string is the child and the second string is the father, and given “ronaldo”, we have to find his number of grandchildren. Here “ronaldo” has 2 grandchildren, so our output should be 2.
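One way to sketch this: build a father → children map, then sum the children of the person’s children:

```python
from collections import defaultdict

def grandchildren(pairs, person):
    """Count grandchildren given (child, father) pairs."""
    children = defaultdict(list)      # father -> list of children
    for child, father in pairs:
        children[father].append(child)
    return sum(len(children[c]) for c in children[person])

pairs = [("luke", "shaw"), ("wayne", "rooney"),
         ("rooney", "ronaldo"), ("shaw", "rooney")]
print(grandchildren(pairs, "ronaldo"))  # → 2 (wayne and shaw, via rooney)
```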

3. Third Round: (Advanced Programming Round)

Here they asked us to create a “Railway Reservation System” and gave us 4 modules: 1. Booking, 2. Availability checking, 3. Cancellation, 4. Prepare chart. We were asked to design the data representation for each module first and then continue with the implementation phase.
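The four modules above can be sketched with a minimal data representation (the class name, method names, and seat-allocation policy here are my assumptions, not the interview’s specification):

```python
class Train:
    """Minimal sketch covering booking, availability, cancellation, and chart."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.bookings = {}            # passenger name -> seat number
        self.next_seat = 1

    def availability(self):
        return self.capacity - len(self.bookings)

    def book(self, name):
        if self.availability() == 0:
            return None               # a real system might use a waiting list
        seat = self.next_seat
        self.next_seat += 1
        self.bookings[name] = seat
        return seat

    def cancel(self, name):
        return self.bookings.pop(name, None) is not None

    def chart(self):
        # "Prepare chart": seat-ordered list of (seat, passenger).
        return sorted((s, n) for n, s in self.bookings.items())

t = Train(capacity=2)
t.book("asha"); t.book("ravi")
print(t.availability())  # → 0
print(t.chart())         # → [(1, 'asha'), (2, 'ravi')]
```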

4. Fourth Round: (Technical Round)

Technical questions revolving around data structures and OOPs concepts.

5. Fifth Round: (HR round)

Some general HR questions, mainly about our projects and about certifications if we had any.

To access some more practice problems, click here.

To master data structures, check out this YouTube playlist.

Also check out GUVI’s Computational Thinking course to master Algorithms and Data Structures.



Top 10 Simple Creative Project Ideas [2019]

Here’s a list of simple yet creative project ideas for 2019:

Project Ideas

  1. Creating your Own Search Engine. Click here to see a list of Sample Submissions

  2. URL Shortener. Sample Projects

  3. A Twitter Client. Sample Projects

  4. CRUD app to manage a hospital with doctors, patients, and appointments. Sample Projects

  5. Make a list that would automatically play your favorite YouTube songs. Sample Projects

  6. To-do List with an Email Reminder. Sample Projects

  7. Weather App. Sample Projects

  8. Credit Card Fraud Detection. Sample Projects

  9. YouTube Trending Video Statistics. Sample Projects

  10. Exercise Tracker:

    Exercise Tracker is a revolutionary app for the fitness buff. It uses the built-in sensors of Android Wear and Apple Watch to automatically detect exercises and count repetitions, and it offers apps for iOS, Apple Watch, Android, and Android Wear. The companion phone app allows you to track your exercise pattern and displays a summary of the workouts done and calories burnt. It should also have a food recommendation page for the user. Design pages and flows for the above. Sample Projects

  11. Meeting Scheduling Web App. Sample Projects


  12. Fault Tolerant Banking System.

    This project depicts a fault-tolerant banking system. It is composed of three main entities: an ATM, a Consortium, and a Bank. The execution flow starts from the ATM, which creates a request message to be forwarded to the Bank through the Consortium. It is done using an open-source computing technology, namely Remote Method Invocation (RMI). Sample Projects

  13. Smart Parking System. Sample Projects

If you are interested in mini projects, check out the mini project ideas below: Top 30 Mini Project Ideas for Students


Celebrate the Big Data Problems – #3

How can we compute our basic statistics (Mean, Median, SD, Var, Cor, Cov) using the R language?

The dataottam team has come up with a blog-sharing initiative called “Celebrate the Big Data Problems”. In this series of blogs we will share our big data problems using the CPS (Context, Problem, Solution) Framework.


In statistics, Mean, Median, Standard Deviation, Variance, Correlation, and Covariance are foundational steps. Everyone from Data Analysts to Data Scientists uses these basic statistics. They can be computed using many languages, but here we will use the language called R.

Mean – The mean is the average of the numbers. It is easy to calculate: add up all the numbers, then divide by how many numbers there are. In other words, it is the sum divided by the count.

Median – The median is the middle value of a sorted list of numbers. To find the median, place the numbers in value order and take the middle one.

Standard Deviation – The SD is a measure of how spread out the numbers are. The symbol for SD is sigma, a Greek letter.

Variance – The variance is a measure of how spread out the numbers are; it is the average of the squared differences from the mean.

Correlation – When two sets of data are strongly linked together, we say they have a high correlation. Correlation is positive when the values increase together, and negative when one value decreases as the other increases.

Covariance – Covariance is a measure of how much two random variables change together.


How can we compute the basic statistics Mean, Median, Standard Deviation, Variance, Correlation, and Covariance using the R language?


Use the functions below, assuming x and y are numeric vectors:

  • mean(x)
  • median(x)
  • sd(x)
  • var(x)
  • cor(x, y)
  • cov(x, y)
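For cross-checking, the same six statistics can be computed in Python’s standard library (the small dataset below is illustrative; the cov/cor helpers are written by hand to mirror R’s sample-based sd, var, cor, and cov):

```python
import statistics

x = [2, 4, 4, 4, 5, 5, 7, 9]
y = [1, 2, 3, 4, 5, 6, 7, 8]

def cov(a, b):
    """Sample covariance (n - 1 denominator), like R's cov()."""
    ma, mb = statistics.mean(a), statistics.mean(b)
    return sum((p - ma) * (q - mb) for p, q in zip(a, b)) / (len(a) - 1)

def cor(a, b):
    """Pearson correlation, like R's cor()."""
    return cov(a, b) / (statistics.stdev(a) * statistics.stdev(b))

print(statistics.mean(x))      # → 5
print(statistics.median(x))    # → 4.5
print(statistics.stdev(x))     # sample SD, like R's sd(x)
print(statistics.variance(x))  # sample variance, like R's var(x)
print(cor(x, y))
print(cov(x, y))
```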



Celebrate the Big Data Problems – #2


How do we identify the number of buckets for a Hive table when writing the HiveQL DDL?

The dataottam team has come up with a blog-sharing initiative called “Celebrate the Big Data Problems”. In this series of blogs we will share our big data problems using the CPS (Context, Problem, Solution) Framework.


Bucketing is another technique for decomposing data sets into more manageable parts. For example, suppose a table using the date as the top-level partition and the employee_id as the second-level partition leads to too many small partitions. Instead, if we bucket the employee table and use employee_id as the bucketing column, the value of this column will be hashed into a user-defined number of buckets. Records with the same employee_id will always be stored in the same bucket. The challenge, though, is to identify the number of buckets for a given Hive table in the big data system. While creating the table you can specify CLUSTERED BY (employee_id) INTO XX BUCKETS, where XX is the number of buckets. Bucketing has several advantages: the number of buckets is fixed, so it does not fluctuate with the data; if two tables are bucketed by employee_id, Hive can create logically correct samples; and bucketing also aids in doing efficient map-side joins.


How do we identify the number of buckets for a Hive table when writing the HiveQL DDL?


To identify the number of buckets we need to do a small exercise, following the steps below:

  • Get the daily / run-wise record counts from the business, vertical, or domain.
  • Convert them into an average incremental percentage by taking at least five days’ (or a week’s) data.
  • Multiply the incremental percentage by 1024 to get the incremental size in megabytes.
  • Divide it by 192 or 128 for RCFile and HiveIO respectively.

Formulae:

Incremental Records = Total Records / Incremental Records
Incremental Records % of Total = (Incremental Records / Total Records) * 100
Incremental Size in MB = Incremental Records % of Total * 1024
No. of Buckets = Incremental Size in MB / 192 (for RCFile)
No. of Buckets = Incremental Size in MB / 128 (for HiveIO)

The reason for converting to MB is that Hadoop stores its files in MB-sized blocks.

Example:

If we initially have 100 records, with an average increment of 5 records per run/day, and we are interested in using RCFile:

Incremental Records = 100 / 5 = 20
Incremental Records % of Total = (20 / 100) * 100 = 20%
Incremental Size in MB = 20 * 1024 = 20480
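The steps and formulae above can be put together in a short sketch (it follows the article’s arithmetic exactly as given; the function name and the rounding-up of the final division are my assumptions):

```python
import math

def estimate_buckets(total_records, incremental_records, block_mb=192):
    """Estimate Hive bucket count per the steps above (192 MB for RCFile,
    128 MB for HiveIO)."""
    ratio = total_records / incremental_records       # e.g. 100 / 5 = 20
    pct_of_total = (ratio / total_records) * 100      # (20 / 100) * 100 = 20
    incremental_size_mb = pct_of_total * 1024         # 20 * 1024 = 20480
    return math.ceil(incremental_size_mb / block_mb)  # round up to whole buckets

print(estimate_buckets(100, 5))       # RCFile  → 107
print(estimate_buckets(100, 5, 128))  # HiveIO  → 160
```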



9 Key Benefits of a Data Lake!

A Data Lake has a flexible definition. To make this statement concrete, the dataottam team took the initiative and released an eBook called “The Collective Definition of Data Lake by Big Data Community”, which contains many definitions from various business and technology experts.

In a nutshell, a Data Lake is a data storage and processing system where an organization can place internal data, external data, partner data, competitor data, business process data, social data, and people data. A Data Lake is not Hadoop, though it leverages the store-all principle of data. The Data Lake is the data scientist’s preferred data factory.

Scalability – The capability of a data system, network, or process to handle a growing amount of data, or its potential to be enlarged to accommodate that growth. One of the horizontal-scalability tools is Hadoop, which leverages HDFS storage.

Converge All Data Sources – Hadoop is able to store multi-structured data from a diverse set of sources. In simple words, the Data Lake can store logs, XML, multimedia, sensor data, binary, social data, chat, and people data.

Accommodate High-Speed Data – To bring high-speed data into the Data Lake, it should use tools like Chukwa, Scribe, Kafka, and Flume, which can acquire and queue high-speed data. This high-speed data can then be integrated with the historical data to extract its fullest insights.

Implant the Schema – To get insights and intelligence from the data stored in the Data Lake, we should implant a schema for the data and make the data flow into an analytical system. The Data Lake is able to leverage both structured and unstructured data.

AS-IS Data Format – In legacy data systems the data is modeled as cubes at the time of data ingestion or ingress. The Data Lake removes the need for data modeling at the time of ingestion; we can do it at the time of consumption. This offers unmatched flexibility to ask any business or domain question and to seek insights and intelligent answers.

The Schema – A traditional data warehouse does not support schema-less storage. The Data Lake, however, leverages Hadoop’s simplicity to store data in a schema-less-write, schema-on-read mode, which is very useful at the time of data consumption.

The Favorite SQL – Once the data is ingested, cleansed, and stored in the structured SQL storage of the Data Lake, we can reuse existing PL/SQL and DB2 SQL scripts. Tools such as HAWQ, Impala, Hive, and Cascading give us the flexibility to run massively parallel SQL queries while simultaneously integrating with advanced algorithm libraries such as MLlib and MADlib and applications such as SAS. Performing the SQL processing inside the Data Lake decreases the time to results and consumes far fewer resources than performing SQL processing outside of it.

Advanced Analytics – Unlike a data warehouse, the Data Lake excels at utilizing large quantities of coherent data along with deep-learning algorithms to recognize items of interest that power real-time decision analytics.

Traditional vs Big Data Data Lake



Self-Learn Yourself Apache Spark in 21 Blogs – #1

We have received many requests from friends who constantly read our blogs to provide them a complete guide to sparkle in Apache Spark. So here we have come up with a learning initiative called “Self-Learn Yourself Apache Spark in 21 Blogs”.

We have drilled down into various sources and archives to provide a perfect learning path for you to understand and excel in Apache Spark. These 21 blogs, which will be written over a period of time, will be a complete guide for you to understand and work on Apache Spark quickly and efficiently.

We wish you all a Happy New Year 2016; start the year with rich knowledge. From dataottam, we wish you good luck to “ROCK Apache Spark & the New Year 2016”.

Please subscribe to our blogs to keep yourself trendy and for future reads on Big Data, Analytics, and IoT.

Blog 1 – Introduction to Big Data

Best wishes to you this holiday, and Happy New Year, from all of us at dataottam.

Assume that you’re preparing to purchase the 50 most popular and best books in the big data space for your college library from around the world. When we do a web search like “good and best books for big data”, we land on many, many pages of results, including various PPTs, PDFs, pics, and more. We even see links to social media like Google+, Facebook, LinkedIn, Twitter, and more.

So now the game begins: how do we decide what is most applicable to our need? Due to time constraints we cannot go through every link, so now our big data problem starts. Now assume that you have a friend who can analyze all this listed data and share with you just the information you need.

So now let’s learn what Big Data is, and what the role and dimensions of Big Data in the enterprise are. Consider that a LinkedIn post gets 200 likes and 10+ comments per day, and there are many posts in the same vein; hence the data generated is so huge that it is unimaginable, or not measurable, with legacy database systems. This collection of very large amounts of data is referred to as big data.

Big data can come from internal data sources, external data sources, LinkedIn, Google, Facebook, Twitter, and personal devices. If all these data are sorted, filtered, and analyzed, they will produce insightful information that can provide key pointers for the enterprise to make business decisions.



Celebrate the Big Data Problems – #1


How can we replace special or required delimiters during a Hive import or ingress from a relational database?

Daily we face many big data problems in production, PoCs, and elsewhere. Do we have any common repo to collect and share them? No; as we know, we don’t have one. As always, dataottam looks forward to sharing its learnings with the community to celebrate similar kinds of problems. And if you have a new kind of big data problem, we can jointly debate and experiment to celebrate our big data problem.

So we, dataottam, have come up with a blog-sharing initiative called “Celebrate the Big Data Problems”. In this series of blogs we will share our big data problems using the CPS (Context, Problem, Solution) Framework.


Whether we are moving a small collection of selfies between apps or moving very large data sets, data transfer remains a challenge. Hadoop is one of the big data problem solvers, but transferring data to and from relational databases still remains a challenge even with Hadoop in place. Hence SQL-to-Hadoop, Sqoop, was created to perform bidirectional data transfer between Hadoop and all other external structured data sources.


How can we replace special or required delimiters during a Hive import or ingress from a relational database?


If we use the --hive-import option to import the data and then select the record count in the destination to check, we will find more records than in the source because of delimiter characters embedded in the data.

So we can instruct Sqoop to automatically clean our data using --hive-drop-import-delims, which will remove the \n, \r, and \01 characters from all string-based columns. Alternatively, we can replace these special characters using --hive-delims-replacement.

sqoop import \

--connect jdbc:mysql:// \

--username dataottam \

--password dataottam \
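What the two options do to a field can be illustrated in a few lines (this is only an illustration of the cleaning behavior, not Sqoop’s actual implementation):

```python
def drop_import_delims(field):
    """Mirror --hive-drop-import-delims: strip \\n, \\r, and \\x01 from a field."""
    for ch in ("\n", "\r", "\x01"):
        field = field.replace(ch, "")
    return field

def replace_import_delims(field, replacement=" "):
    """Mirror --hive-delims-replacement: swap the same characters for a token."""
    for ch in ("\n", "\r", "\x01"):
        field = field.replace(ch, replacement)
    return field

dirty = "line one\nline two"
print(drop_import_delims(dirty))     # → line oneline two
print(replace_import_delims(dirty))  # → line one line two
```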



Attitudes of a Great Software Developer!!

Welcome to GUVI blogs. Today’s topic is the attitudes of a great software developer.

Software development is an art, not just a science. You can learn all the technicalities of software development, but you need to be absolutely passionate about coding and perceive it as an art to be extremely good at it. If you are one such person, I will introduce you to the journey of becoming a “Great Developer”. The objective of a Great Developer, as I call him/her, is to make his/her art as beautiful as possible and make it the best.

In my own thoughts, I will share some attitudes which a great developer should have, apart from the general expectations of being technically and analytically sound, understanding requirements in detail, having good design skills, etc.

Attitude #1 – A bug is a question of my ability to write good code

Fixing bugs is part and parcel of a software developer’s activities. A bug is obviously the worst enemy of a developer. But how many developers think along the following lines while fixing defects?



Twitter’s Twists…

Here is a little intro to Twitter. It’s fine if you don’t have a bank account, but if you don’t have a Twitter or Facebook account, people look at you as if you were a terrorist. So come along, let me turn you into a good citizen.

Let’s see how to start an account on Twitter. It is as easy as boiling water on a stove. Type the address in the address bar. If the page doesn’t appear at once, that’s just our net speed; wait a moment and it will load.

Under the heading “New to Twitter?”, click the “Sign up for Twitter” button. A page titled “Join Twitter Today” will then open. Whether you give your real name in the “Full name” field or a made-up one is your choice. The convenience of Twitter is that no “Personal Identity” is required at all.

Next, give your e-mail address. Then, as usual, the name of your boyfriend(s)/girlfriend(s), that is, the password. Finally, the username: you can pick anything that will catch others’ attention, even “fool” or “grouch”. Or, since no one is going to call you by it anyway, you might as well pick “genius”.

Once you have somehow finished all that, press the “Create my account” button. After that, decorate your profile with whatever endearments you like. Now your chatter can begin.

Without anyone knowing who you are, you can publish your thoughts and opinions alone, and “needle” people with them. It’s like relaxing to Ilaiyaraaja songs for a while: you can freely say something and move on. But you cannot say too much either; there is a condition that everything must be finished within 140 characters. The only thing such conditions do not apply to is death. Because you must finish within 140 characters, it sharpens the thinking of good writers; for those with a passion for writing, it is good glucose.

For those who want to idly pass the time, it is waste paper: scribble whatever you want and go. But if a reaction comes, you will have to face it, because this is social media. You can follow celebrities you could never otherwise contact and converse with them. The more followers you get, the more famous you become. My best wishes for you to become a problem… sorry, famous!


Tips to win “Paper Presentation”.

Hi Buddies, I am L. PRIYANKA from NEW PRINCE SHRI BHAVANI COLLEGE OF ENGG. AND TECHNOLOGY. I am going to share the secret of my success; I think it will be useful for you, my friends. I am going to share some tricks to win paper presentations. Here I am reminded of a dialogue from the film “Muthu”.


“Kedaikuradhu kadaikama irukadhu; kedaikama irukuradhu kedaikadu.” (What is meant to come will not fail to come; what is not meant to come will never come.)

If you follow the things that I have given, surely you will get the prize; i.e., the prize is meant for you and will not fail to come. You just have to put in some hard work.

I think I am talking too much. OK, let us come to the matter. I got a prize in a paper presentation, so I am going to share my experience.

Some Points to remember,

  • Take an IEEE paper. You may ask me, “Why should we always take an IEEE paper?” I asked the same question to my ma’am; she told me it is the standard format for doing and submitting a project, which is strictly followed in many colleges.
  • We should always obey our elders, so let us take an IEEE paper and read it thoroughly.
  • Try to find the problem in that paper.
  • Find a solution for the problem.
  • If you have implemented and can show the solution for the problem, then surely you can win the prize.
  • They will ask, “What is your contribution in this paper?” Most people simply present the paper and say they are just introducing the concept. Please don’t do that, my friends.
  • Always try to propose a new concept or approach. It will help you fetch the prize.
  • Keep the slides very short, i.e., only 15 slides including your concept name, thank-you, and queries.
  • A slide should not carry full lines of text; it should only have hints, and those too are for you, not for the judges or the audience.
  • Don’t read out whatever is on the slides, because everyone knows how to read the text.
  • Always try to complete the presentation within the allotted time; otherwise you may get negative marks for not managing the time properly. In other words, time management is very important in a presentation.
  • If you exceed the time, they will tell you to conclude the presentation, so it is better to finish early.


I have shared all my experience. So, I want you to keep this in mind and do your paper presentation to win the prize.

Thank You.


I would like to thank Arun Prakash and Sridevi ma’am for encouraging me to write this blog. Thank you. Thanks for giving me this opportunity.

For more innovative videos, follow us! :)