
Guide On How To Use The scanf And fscanf Functions In C++

January 12, 2019 by Krishna Srinivasan

When programming in C++, there are many situations when you need to read data from a stream or from standard input.

Reading and storing data while classifying it into different formats is a key part of how programs run. Without a means to call upon external data as input, programs would be unable to operate with any data beyond what users input with cin or similar input functions.

The C++ functions scanf and fscanf read and store formatted data.

  • scanf (pronounced: scan-eff) reads formatted data from the standard input and stores it at the locations its arguments specify.
  • fscanf (pronounced: eff-scan-eff) reads formatted data from the stream identified by the FILE pointer passed as its first argument.

How the fscanf and scanf Functions Work

Both of these functions read data and convert it according to a format parameter, storing the results at the locations given by their additional arguments. Those arguments must point to allocated objects of the types specified in the corresponding format-string.

For both functions, the format-string establishes how the input is interpreted. The format-string may contain only multibyte characters that begin and end in the initial conversion state. To use scanf and fscanf effectively in C++, you have to understand what the format-string can contain.

These strings can contain any of the following types of characters:

  • White Space Characters. White space characters are specified by isspace(). They include blanks and newline characters. When scanf or fscanf read a white space character, they do not store it. This means that one white space character is essentially the same as any number of white space characters in the format-string – quantity produces no functional difference.
  • Non-White Space Characters. This includes all characters that are not blanks or newlines, with the exception of the percentage sign (%) character. When the format-string contains a non-white space character, scanf and fscanf read, but do not store, the matching character from the input stream. If the next character in the stream does not match, the function stops.
  • Format Specifiers. Conversion specifications beginning with the percentage sign character specify the type and format of any data that scanf or fscanf needs to retrieve. Upon reading this character, both functions will read and convert the following characters into the values of the format specified. This value must be assigned to an argument in the argument list.

Both functions read the format-string from left to right. Every character outside a format specifier must match the next character in the input stream.

If the characters do not match, the function terminates with a matching failure. The conflicting character remains in the input stream as if the program never read it.

Once the program reads its first format specifier, it converts the value of the first input field according to that specification and stores it in the first argument. The second format specifier tells the program to convert the second input field and store it in the second argument, and so on. This orderly pattern continues until the format-string ends.
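To see this left-to-right matching in action, here is a minimal sketch (the date format is an arbitrary choice for illustration):

#include <stdio.h>

int main ()
{
     int month, day, year;

     /* The literal '/' characters in the format-string must match the
        input exactly; each %d converts the next numeric field in order. */
     if (scanf ("%d/%d/%d", &month, &day, &year) == 3)
          printf ("month=%d day=%d year=%d\n", month, day, year);
     else
          printf ("Matching failure.\n");   /* conflicting character stays in the stream */

     return 0;
}

Given the input 1/12/2019, this prints month=1 day=12 year=2019; given 1-12-2019, the first literal / fails to match and the conflicting character remains unread.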

Working with the Format Specifier

The format specifier for fscanf and scanf has the following form:

%[*][width][length]specifier

The specifier character at the end of this statement is the most important element of the format specifier. It defines which characters scanf or fscanf will extract, lays out how it will interpret those characters, and identifies their corresponding argument type.

C++ programmers need to be familiar with 12 types of format specifiers:

  • Integers. Specify integers using “i”. This format extracts any number of digits, optionally signed. Although it assumes decimal digits, a leading “0” in the input switches the conversion to octal digits and a leading “0x” to hexadecimal digits.
  • Decimal Integers. Specify decimal integers using “d” for a signed argument or “u” for an unsigned argument. This format extracts any number of decimal digits. You can use both positive and negative integers.
  • Octal Integers. Specify octal integers as an unsigned argument using “o”. These digits use a base-eight numbering system. This format supports positive and negative numbers.
  • Hexadecimal Integers. Specify hexadecimal digits with an unsigned argument using “x”. This format supports positive and negative numbers.
  • Floating Point Numbers. Specify a floating number format using “f”, “e”, “g”, or “a”. Floating point numbers can contain decimal points and be either positive or negative. C99-compliant implementations also support hexadecimal floating-point format using the “0x” prefix.
  • Characters. Specify the next character using “c”. If the format specifier features a width other than 1, the function will read that exact number of characters and store them in successive array locations as per the argument. It will not append null characters at the end.
  • Character Strings. Specify strings of characters using “s”. This format lets you specify any number of non-whitespace characters. It will stop at the first whitespace character found and automatically add a null character at the end of the sequence.
  • Pointer Addresses. Specify pointer addresses with “p”. This format extracts a sequence of characters that represent a pointer. The particular format of the pointer depends on the system and library you are using.
  • Scansets. Specify a scanset by placing the characters you wish to scan for inside a pair of brackets (see the sketch after this list).
  • Negated Scansets. Specify a negated scanset using a “^” prefix inside the brackets. This reads any characters not listed in the brackets.
  • Count. To store the number of characters read so far from the standard input at the pointed location without consuming input, specify “n”.
  • Identity. A “%” followed by a second “%” in the format-string matches a single literal “%” in the input.
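A short sketch of scansets and negated scansets follows (the bracket contents are arbitrary choices for the example):

#include <stdio.h>

int main ()
{
     char vowels[32], rest[32];

     /* %31[aeiou] reads only leading vowels; %31[^\n] then reads
        everything up to, but not including, the newline. */
     if (scanf ("%31[aeiou]%31[^\n]", vowels, rest) == 2)
          printf ("leading vowels: %s, remainder: %s\n", vowels, rest);

     return 0;
}

Typing aeixyz produces leading vowels: aei, remainder: xyz; input that begins with a consonant causes a matching failure on the first scanset.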

There are additional sub-specifiers available in the format specifier argument. These can change the function of fscanf and scanf according to commonly needed behaviors.

  • The asterisk (*) is optional. It tells the function to read and match data from the stream but to discard it rather than store it in any argument.
  • The [width] sub-specifier tells the reading operation to read at most a specific number of characters.
  • The [length] sub-specifier tells the function which storage type to expect, using the length modifiers (h, l, ll, and so on) defined by C99.
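Here is a hedged example combining all three sub-specifiers (the input layout is assumed for illustration):

#include <stdio.h>

int main ()
{
     char code[4];
     long serial;

     /* %*d reads and discards an integer, %3s reads at most three
        characters, and %ld uses the 'l' length sub-specifier to store
        the final field into a long rather than an int. */
     if (scanf ("%*d %3s %ld", code, &serial) == 2)
          printf ("code=%s serial=%ld\n", code, serial);

     return 0;
}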

Unlocking fscanf and scanf for Multithreading

Both fscanf and scanf are safe to call from multithreaded applications: the standard C library locks the stream for the duration of each call. Some platforms also provide unlocked variants of the stdio functions that skip this locking for speed.

These unlocked versions are not thread-safe on their own, so you must call them only while the invoking thread owns the FILE object – for instance, after successfully calling flockfile() or ftrylockfile().
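As a minimal sketch of that locking discipline (flockfile, funlockfile, and getc_unlocked are POSIX additions, not part of standard C++):

#include <stdio.h>

/* Lock the stream once, then use unlocked primitives inside the
   locked region; no other thread can interleave its own stdio calls. */
long count_lines (FILE *fp)
{
     long lines = 0;
     int c;

     flockfile (fp);                       /* this thread now owns fp */
     while ((c = getc_unlocked (fp)) != EOF)
          if (c == '\n')
               lines++;
     funlockfile (fp);                     /* release ownership */

     return lines;
}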


Example Using scanf to Return Data

In order to use scanf, you must include the <stdio.h> standard header in your code. Combined with printf, it allows you to enter and return data quickly. Here is an example:

#include <stdio.h>

int main ()
{

 char str [80];
 int i;
 printf ("Enter your family name: ");
 scanf ("%79s",str);  
 printf ("Enter your age: ");
 scanf ("%d",&i);
 printf ("Mr. %s , %d years old.\n",str,i);
 printf ("Enter a hexadecimal number: ");
 scanf ("%x",&i);
 printf ("You have entered %#x (%d).\n",i,i);  

return 0;
}

This example uses scanf alongside standard input to ask the user for various types of information. It specifies the type of each value it asks for, which lets the program work with those values later on. Example responses could include:

Enter your family name: Jackson
Enter your age: 30
Mr. Jackson, 30 years old.
Enter a hexadecimal number: ff
You have entered 0xff (255).


Example Using fscanf To Return Data

Functionally, fscanf works in a similar way to scanf. The main difference is that fscanf has to read from a file, which means you have to identify a file to read from and then tell the program what to look for inside the file. For example:

#include <stdio.h>

int main ()
{
 char str [80];
 float f;
 FILE * pFile;

 pFile = fopen ("myfile.txt","w+");
 fprintf (pFile, "%f %s", 3.1416, "PI");
 rewind (pFile);
 fscanf (pFile, "%f", &f);
 fscanf (pFile, "%79s", str);
 fclose (pFile);
 printf ("I have read: %f and %s \n",f,str);

 return 0;
}

This example creates “myfile.txt” and writes a float number and a string into the file. The stream then rewinds, and the program reads both of those values with fscanf. The output it produces will look like this:

I have read: 3.141600 and PI


Are scanf and fscanf Useful For Professional C++ Programmers?

Both scanf and fscanf are important functions for any C++ student to master on the way to becoming a professional programmer. In more advanced scenarios, you can use these functions to read vectors and other complex forms of data into a C++ program.

However, these functions can act unpredictably when dealing with human error. If a user doesn’t input exactly the type of information, in exactly the format the program expects, they may fail to read any of the data. This makes them an uncommon sight in environments where users expect programs to run predictably even when they make mistakes inputting data.

In C++, both scanf and fscanf generally operate much faster than their iostream equivalents. Commercial applications designed to maximize speed while processing predictable inputs may use them to great effect to achieve that goal.


Beginner’s Guide – Java vs. C++: Which Of These Two Should I Learn?

January 10, 2019 by itadmin

When it comes to coding, there are tons of languages to choose from. Students new to coding can choose between high-level languages like Python and low-level languages like C or C++.

But which is the best one to start with? Every professional programmer on the planet made a choice like this one at the beginning of their academic career. It may be the first and most important choice that any future programmer has to make.

In many situations, the list of potential first-time coding languages narrows down to two options: Java and C++. The most significant reasons for this revolve around the nature of each language as it pertains to the tech industry as a whole:

  • Java is a hugely popular general-purpose programming language designed to run on nearly any device. It is the language of choice for client-server web applications, with 9 million developers using the platform for this purpose.
  • C++ is a low-level programming language commonly used for large software infrastructure projects and for embedded software projects. It offers a more versatile environment suitable for achieving a wide range of programming goals.


Identify Your Goals Before Choosing Between Java vs. C++

It’s impossible to say whether you should choose Java or C++ without focusing on your long-term goals. These goals will define which aspects of each language – and of the languages you are likely to learn later – are the most useful and attractive.

Both Java and C++ programmers make six-figure incomes on average. While there is generally more overall demand for Java, C++ is a fixture of some of the largest and most important institutions on the planet, and it’s a powerful platform for increasing your overall career prospects as a programmer.

To be clear: learning a programming language cannot limit your opportunities, so there is no structural reason why a programming student cannot learn both Java and C++ at the same time. However, focusing on one at a time can be more practical for anyone who doesn’t have the luxury of an unlimited timeframe to learn.

Both of these programming languages have important applications in the tech industry and beyond. Finding out what they do best can be a great help when choosing between them.

Where Java Can Take You

When programmers think of Java, they often think of the following things:

  • Application servers
  • Web applications
  • Mobile applications
  • Desktop and enterprise applications

However, there are many other uses for Java and for languages related to Java. Cross-language development applications like JNBridge allow Java developers to leverage their Java experience even when not working on purely Java-based applications.

Java’s ability to work anywhere makes it a powerful skillset in dynamic environments where lots of users need access to high-level functionalities. This makes Java a great tool for coding cloud-based applications, e-commerce web portals, and customized mobile apps.

Java is also one of the languages of choice for unit testing. Unit testing is the process of exercising individual units of code in a test environment, with each test passing or failing according to whether the unit produces the desired outcome. Java programmers can learn to do this for robotic and Internet of Things (IoT) applications.
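As a minimal sketch of what such a test looks like (this assumes JUnit 4 on the classpath, and Calculator is a hypothetical class under test):

import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class CalculatorTest {
    // Calculator is a hypothetical class under test.
    @Test
    public void additionWorks() {
        Calculator calc = new Calculator();
        assertEquals(4, calc.add(2, 2));   // the test fails if add() misbehaves
    }
}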

IoT is an important area of Java development. While the industrial side of IoT development is usually covered by lower-level languages, Java is one of the languages of choice when it comes to IoT user interface development, wearable tech applications, and other smart technologies.

Java is also an important language for mobile and browser-based gaming. Android relies on Java for a broad range of games and some of its most popular apps. But if you want to develop games for console platforms and PC, then C++ is the best place to start.

Java is structurally similar to Ruby, which is an in-demand language for quickly building websites and web applications. The Ruby on Rails framework reduces the amount of time programmers have to spend writing repetitive code by offering a set of agreed-upon conventions that allow for speedy development.

Where C++ Can Take You

Most programmers would agree that being well-versed in C++ is a great foundation for further development in the tech industry. C++ is a lower-level, more fundamental programming language that requires more work (for some) to learn and master than Java.

C++ is the language of choice for a few very important applications:

  • Large, institutional applications like those used by banks, governments, and other institutions.
  • Embedded software designed to operate robots, satellites, consumer electronics, and other hardware devices.
  • Graphics-intensive video games and scientific applications.

C++ is the older of the two languages, and much of Java’s syntax is borrowed from the C++ mindset. While some may find it takes longer to learn, it offers a more robust foundation for further learning. For instance, learning Java is simple for someone who is already familiar with C++, but the opposite is not true.

C++ is part of some of the world’s biggest and most respected brands. Bjarne Stroustrup, the inventor of C++, maintains a list of some of these companies, which include Adobe, Amazon, Apple, Facebook, Google, and more.

Stroustrup’s list illustrates a key difference between Java and C++. Whereas Java is an easy language to learn and write in, which reduces development time, C++ produces the leanest and most effective code for high-impact applications. This is why Lockheed Martin uses it for mission-critical airplane software – the performance is worth the cost of keeping C++ programmers busy.

Compared to almost every other language, C++ costs more in terms of development time and much less in terms of operational expenses. Programs written in C++ tend to use computer resources more efficiently than those written in Java or other languages, partly because C++ gives programmers direct control over memory management and threading in ways that other languages don’t support.

Programmers with experience in C++ are more likely to be part of huge projects with large teams in an enterprise environment. In this type of environment, a completed project can easily run to more than a million lines of code, and it’s not uncommon to reach five, six, or even ten million lines. The resulting programs are solid, valuable tools that offer completely customized functionality to users.


Java vs. C++: Long-Term Goals for New Programmers

Different types of people will find themselves suited to each of these languages for different reasons. For instance, someone who likes the idea of being their own boss and working as a freelance mobile application developer will have a lot more success with Java than with C++.

However, that same person may be able to leverage their knowledge of C++ to learn a complementary language like Perl or Python, vastly improving their marketability in a competitive economy.

Most professional computer programmers know multiple languages. It’s very common for programmers to learn new languages on-the-fly when working on projects that require them – experienced developers can become “fluent” in a new computer programming language in a few weeks.

This is an ingrained part of the tech industry. It’s not uncommon for a professional C++ programmer working on a large project to work on a user interface written in Java, reporting to a database layer written in PHP, and generating reports in HTML and CSS. Each one of these program interfaces gives the entire team a little bit more knowledge of the other programming languages involved.


Learn New Languages On a Needs-Oriented Basis

One of the best pieces of advice that new programming students can take to heart is to treat new languages as the tools they are. Learning a programming language for fun is interesting at first, but won’t produce a sustainable environment for cultivating professional skills in the long run – not on its own, at least.

Students who learn new programming languages on a needs-oriented basis position themselves to gather the most useful information for their goals in an efficient, results-oriented way.

This means that instead of comparing Java vs. C++ as your first programming language, you may be more successful by asking yourself what kinds of programs you want to make:

  • If you want to build a web scraper capable of handling lots of data, learn Python or Java.
  • If you want to write mobile applications, focus on Java or Apple’s Swift.
  • If you’re into PC and console-based video games, start with C++.
  • If you want to analyze lots of data or write machine learning programs, learn Python or R.
  • If writing embedded systems to make hardware function fascinates you, go for C++.
  • If you want to enter the world of IoT development, rely on either Java or C++ to take you there.

By focusing on the results of your programming skills, you will conveniently sidestep the risk of spending time and energy learning skills you end up rarely using. You will have a clear path towards determining which language is the best fit for your needs and be able to start down the path of becoming a professional programmer.


A Complete Guide To NumPy Functions in Python For Beginners

December 10, 2018 by Krishna Srinivasan

There is a common saying among low-level language developers like those schooled in C++. They admit that Python improves development time but claim that it sacrifices runtime in the process.

While this is certainly true for many applications, what few developers know is that there are ways to speed up Python operation time without compromising its ease of use. Even advanced Python developers don't always use the tools available to them for optimizing computations.

In the world of intensive programming, where repeating millions of function calls is common practice, a runtime of 50 microseconds per call is considered slow. Those 50 microseconds, repeated over a million function calls, translate to an additional 50 seconds of runtime – and that is definitely slow.

NumPy functions like ndarray.size, np.zeros, and its all-important indexing functions can radically improve the functionality and convenience of working with large data arrays.


What Is NumPy and What Is It for?

On its own, Python is a powerful general-purpose programming language. The NumPy library (along with SciPy and MatPlotLib) turns it into an even more robust environment for serious scientific computing.

NumPy establishes a homogeneous multidimensional array as its main object – an n-dimensional matrix. You can use this object as a table of same-type elements indexed by tuples of positive integers.

For the most part, only Python programmers in academic settings make full use of the computational opportunities this approach offers. The majority of other developers rely on Python lists. This is a perfectly feasible method for dealing with relatively small matrices, but it gets very unwieldy when dealing with large ones.

For example, if you were trying to create a cube array of 1000 cells cubed – a 1 billion cell 3D matrix – you would be stuck at a minimum size of 12 GB using Python lists. 32-bit architecture breaks down at this size, so you're forced to create a 64-bit build using pointers to create a "list of lists". If that sounds inefficient and infeasible, that's because it is.

With NumPy, you could arrange all of this data into a 64-bit build that takes up about 4 GB of space. The amount of time it would take to manipulate or compute any of that data is much smaller than if you try to implement an iterative, nested Python list.


Basic NumPy Functions

In order to use Python NumPy, you have to become familiar with its functions and routines. One of the reasons why Python developers outside academia are hesitant to do this is because there are a lot of them. For an exhaustive list, consult SciPy.org.

However, getting started with the basics is easy to do. Knowing that NumPy establishes an N-dimensional matrix for elements of the same type, you can immediately begin working with its array functions.

NumPy refers to dimensions as axes. Keep this in mind while familiarizing yourself with the following array attributes:

  • ndarray.ndim refers to the number of axes in the current array.
  • ndarray.shape defines the dimensions of the array. As mentioned earlier, NumPy uses a tuple of integers to indicate the size of the array along each axis. For a matrix with n rows and m columns, shape is (n, m). The length of the shape tuple is equal to ndarray.ndim.
  • ndarray.size counts the number of elements that make up the array. It will be equal to the product of the individual elements in ndarray.shape.
  • ndarray.dtype describes the elements located in the array using standard Python element types or NumPy's special types, such as numpy.int32 or numpy.float64.
  • ndarray.itemsize refers to the size of each element in the array, measured in bytes. This is how you determine how much NumPy has actually saved you in terms of storage space.

Using these attributes to describe an array in NumPy would look something like this:

>>> import numpy as np
>>> a = np.arange(15).reshape(3, 5)
>>> a
array([[ 0,  1,  2,  3,  4],
       [ 5,  6,  7,  8,  9],
       [10, 11, 12, 13, 14]])
>>> a.shape
(3, 5)
>>> a.ndim
2
>>> a.dtype.name
'int64'
>>> a.itemsize
8
>>> a.size
15
>>> type(a)
<type 'numpy.ndarray'>
>>> b = np.array([6, 7, 8])
>>> b
array([6, 7, 8])
>>> type(b)
<type 'numpy.ndarray'>

The example defines an array as a and then identifies the size, shape, and type of its elements and axes.


How to Create Arrays

Since NumPy is all about creating and indexing arrays, it makes sense that there would be multiple ways to create new arrays. You can create arrays out of regular Python lists and create new arrays comprised of 1s and 0s as placeholder content.

Creating an Array from a Python List

If you have a regular Python list or tuple that you would like to use with NumPy, you can create an array whose element type is inferred from the elements of the sequence. This would look like the following example:

>>> import numpy as np
>>> a = np.array([2, 3, 4])
>>> a
array([2, 3, 4])
>>> a.dtype
dtype('int64')
>>> b = np.array([1.2, 3.5, 5.1])
>>> b.dtype
dtype('float64')

In this example, there is a specific format for calling np.array that many NumPy beginners get wrong. Notice the parentheses and the brackets around the list of numbers that make up the argument:

>>> a = np.array([x,y,z])

Most coders new to NumPy will use only parentheses, which passes multiple separate numeric arguments instead of a single sequence. This will result in a botched array and potentially many hours of frustrated debugging followed by a final "ah-hah!" moment.

Understanding how np.array works is actually quite simple. It will transform sequences of sequences into a two-dimensional array. It will transform sequences of sequences of sequences into a three-dimensional array, working in the same way to the nth degree.

This is one of the main ways that NumPy actually delivers on its promise to radically optimize indexing for very large arrays. It functions as a "list of lists" but does so using a matrix of arbitrary dimensions.
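For instance, a minimal sketch of this sequence-of-sequences behavior:

>>> import numpy as np
>>> m = np.array([(1.5, 2, 3), (4, 5, 6)])   # two sequences become a 2-D array
>>> m
array([[ 1.5,  2. ,  3. ],
       [ 4. ,  5. ,  6. ]])
>>> m.ndim
2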

Making a Placeholder Matrix Using np.zeros

It's very common for programmers to have to create an array for an unknown set of elements. Even if you don't know the values of the elements themselves, it's easy to determine the size of the matrix. If you know the size of the matrix, then you can create a placeholder array in NumPy and fill it with placeholder content – in this case, with zeroes.

Creating a placeholder matrix full of zeroes allows you to establish the size of an array at the outset of your NumPy session. This way, you won't have to grow the array later on, which is a complicated and expensive operation that you generally want to avoid unless absolutely necessary.

Here's an example of how np.zeros works:

>>> np.zeros( (3,4) )
array([[ 0.,  0.,  0.,  0.],
       [ 0.,  0.,  0.,  0.],
       [ 0.,  0.,  0.,  0.]])

Notice that in NumPy, you have to spell np.zeros exactly as written. There is no such command as np.zeroes, as many beginning NumPy users find out when their first placeholder arrays don't load as expected.

Other Placeholder Arrays: np.ones and np.empty

There are other placeholder arrays you can use in NumPy. The two main ones are np.ones and np.empty. Both of these establish a dtype for the created array, which is set by default to float64 – a 64-bit floating-point type.

Here is an example of how to create an np.ones array in Python using NumPy:

>>> np.ones( (2,3,4), dtype=np.int16 )     # dtype can also be specified
array([[[1, 1, 1, 1],
        [1, 1, 1, 1],
        [1, 1, 1, 1]],

       [[1, 1, 1, 1],
        [1, 1, 1, 1],
        [1, 1, 1, 1]]], dtype=int16)

You can use np.empty to create an uninitialized array of random data. NumPy generates this data depending on the state of your memory:

>>> np.empty( (2,3) )                      # uninitialized, output may vary
array([[  3.73603959e-262,   6.02658058e-154,   6.55490914e-260],
       [  5.30498948e-313,   3.14673309e-307,   1.00000000e+000]])

Creating a Sequenced Array Using arange and linspace

You can use np.arange to create a sequenced array in much the same way a standard Python programmer uses range to return lists. Here are two examples of how you can create a sequenced array:

>>> np.arange( 10, 30, 5 )                 # multiples of 5 between 10 and 30
array([10, 15, 20, 25])
>>> np.arange( 0, 2, 0.3 )                 # compatible with float arguments like 0.3
array([ 0. ,  0.3,  0.6,  0.9,  1.2,  1.5,  1.8])

You should notice, however, that there is no way to predict the exact number of elements you'll get out of the np.arange function when using floating point arguments. This is a built-in limitation of the precision that floating point arithmetic can offer.

For this reason, NumPy programmers typically prefer to use np.linspace with an argument describing the number of elements they want. An example of how this works looks like so:

>>> from numpy import pi
>>> np.linspace( 0, 2, 9 )                 # 9 numbers between 0 and 2
array([ 0.  ,  0.25,  0.5 ,  0.75,  1.  ,  1.25,  1.5 ,  1.75,  2.  ])
>>> x = np.linspace( 0, 2*pi, 100 )        # useful to evaluate a function at many points
>>> f = np.sin(x)

This is just the start of using NumPy functions like np.zeros to create and manage arrays of data. You can use these to build your first price graphs, Big Data comparison tables, and more.


Beginner’s Guide On How to Use the CIN Function in C++

December 7, 2018 by Krishna Srinivasan

C++ is a powerful, complex programming language particularly well-suited to solving sophisticated problems. It may not feature the elegant simplicity high-level programming languages like Java and PHP enjoy, but it is a must-have tool for developing operating systems, software drivers, generalizable web servers, and embedded systems.

As a mature, performance-based coding language, it can run nearly anywhere and benefits from a large community of developers. It's the language of choice when creating large-scale commercial applications – huge systems designed for fast, efficient performance.

Before getting to those systems, beginning C++ coders need to become deeply familiar with some of the language's main functions. CIN (pronounced see-in) is one of them.

CIN combines with COUT (see-out) under the standard header <iostream> to allow programs to input and output data. Using CIN and COUT to become acquainted with the way C++ works is an important step for any future programmer.

In order to use this tutorial, you will need to use Microsoft Visual Studio for Windows or XCode for OS X. These are the compilers you will use to execute the C++ code examples below.


Writing Simple Code Using CIN in C++

When your compiler sees CIN, the C++ program expects to receive an input. Usually, the best option for inputting values to a C++ program is the computer keyboard. This can change with more advanced use cases, but for the purposes of this guide, the keyboard is all you need.

One of the simplest possible CIN functions is assigning a numerical value to an integer. Most C++ students will write and compile a program like this one at some point in their introductory courses:

#include <iostream>
using namespace std;

int main()
{
     int x;

     cout <<"Enter a number";  //prompts the user to input a value
     cin >> x; //assigns that value to the variable x
     cout << endl << "You entered: " << endl;
     cout << x << endl; //Shows the value of x as assigned

return 0;
}

The most important thing to know about this barebones snippet of C++ code is what happens after int main(). The code that says int x defines the variable x so that you can use it later on. In this case, the program assigns whatever the user inputs to x.

You'll notice that CIN directly takes the user input and assigns it to the variable without interpreting or qualifying it. The extraction operator >> tells the program to read the next non-whitespace characters. If you want to treat user inputs in a more sensible, human-readable way, you need to use CIN.get and CIN.getline.


Writing Code with CIN.getline in C++

Here is a simple program that uses the getline member function to read user input into character arrays:

#include <iostream>
using namespace std;

int main()
{
     char name[20], address[20];
     cout << "Name: ";
     cin.getline(name, 20);

     cout << "Address: ";
     cin.getline(address, 20);
     cout << endl << "You entered " << endl;
     cout << "Name = " << name << endl;
     cout << "Address = " << address << endl;

return 0;
}

In this case, the program stores the user's input in the character arrays defined by char name[20] and address[20]. When someone starts this program, the command line will prompt the user to input a name and then an address. It will then display the name and address entered.

The ability to read inputs through member functions like cin.get and cin.getline lets you treat user input as data your program can work with. This is a simple and straightforward way to get input from a user and then manipulate it through programming.


What Is the Difference Between CIN.get and CIN.getline?

CIN.get and CIN.getline work in very similar ways. Both of them read input data into the buffer you specify. However, there is a subtle difference in the way each one works.

  • CIN.get includes white-space characters. This can include delimiters and newline characters, like the Enter/Return button.
  • CIN.getline restricts its function to the line in question. It removes the terminating character of the line, giving you a neat line that you can then manipulate further inside the program.

Since most people expect to terminate input by hitting Enter/Return, beginning programmers should use CIN.getline for user input. The only time you would want to use CIN.get is when you want to include terminating characters, or when you want a more specific means of identifying which part of the user's input the program will use.

For instance, if you define a buffer using char line[25] and then use cin.get( line, 25 ), your program will read at most 24 characters of the user's input, leaving anything beyond that in the input stream.
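A minimal sketch of that behavior (the 25-character buffer comes from the example above):

#include <iostream>
using namespace std;

int main()
{
     char line[25];

     cout << "Type a line: ";
     cin.get(line, 25);   // reads at most 24 characters plus a null terminator
     cout << endl << "Stored: " << line << endl;

     return 0;
}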

CIN.get and CIN.getline exist to allow programmers to qualify user inputs. If you build a program that calls directly from CIN, you run the risk of the program crashing every time the user makes a tiny mistake. This is poor program behavior – people expect programs to function predictably regardless of whatever input they give it.


One More CIN Function: CIN.Ignore

If you want to extract and discard a certain number of characters from user input, including whitespace characters, you can use CIN.ignore to do so. This function has the following form:

cin.ignore( streamsize nCount = 1, int delim = EOF );

Here, nCount is the maximum number of characters to extract and delim defines the delimiter character at which extraction stops. The default delimiter is EOF, which stands for end-of-file. Here is an example of how to use CIN.ignore in a simple program:

// istream::ignore example
#include <iostream>

int main ()
{
     char first, last;

     std::cout << "Please, enter your first name followed by your surname: ";

     first = std::cin.get();     // get one character
     std::cin.ignore(256,' ');   // ignore until space
     last = std::cin.get();      // get one character

     std::cout << "Your initials are " << first << last << '\n';

     return 0;
}

This program will ask the user for a first name and surname separated by a space, and then define two variables based on where the user puts the space. The first character before the space supplies the first initial, and the first character after the space supplies the last initial.


Introducing Strings and Stringstream

If you combine stringstreams with CIN, your C++ programs can treat strings of characters just like the other fundamental data types you're used to working with.

This requires the use of getline. If you instead read into a string with the extraction operator (>>), the first space the user types will end the input – you will only ever get a single word, rather than a phrase or sentence.

In order to use strings, you will have to include the <string> standard header and then read the input with getline(cin, mystr). Consider this example:

// cin with strings
#include <iostream>
#include <string>
using namespace std;

int main ()
{
     string mystr;
     cout << "What's your name? ";
     getline (cin, mystr);
     cout << "Hello " << mystr << ".\n";
     cout << "What is your favorite sports team? ";
     getline (cin, mystr);
     cout << "I like " << mystr << " too!\n";
 

    return 0;
}

In this example, the program will greet the user using his or her name and then say it likes the same sports team the user defines. In both cases, the program uses the same variable to refer to the input variables.

During the second call of the string identifier (mystr), the program simply uses the most recent input string.

If you use the standard header <sstream>, you can define a stringstream in such a way as to treat a user input string as a stream. This allows you to perform extraction or insertion operations with strings the same way you do with CIN and COUT.

The stringstream is exactly what it sounds like. It's a stream of user-input strings that is created by telling the program to throw away the previous input and replace it with a new one.

This behavior allows you to write programs that handle inputs in a more organized and less heavy-handed way than identifying every single variable that a user may input. Consider this example:

// stringstreams
#include <iostream>
#include <string>
#include <sstream>
using namespace std;

int main ()
{
     string mystr;
     float price = 0;
     int quantity = 0;

     cout << "Enter price: ";
     getline (cin, mystr);
     stringstream(mystr) >> price;
     cout << "Enter quantity: ";
     getline (cin, mystr);
     stringstream(mystr) >> quantity;
     cout << "Total price: " << price * quantity << endl;

     return 0;
}

This program gets numeric values directly from user input and multiplies them. Instead of using CIN directly, it uses getline to manipulate user input prices and treat them as variables, defined as price and quantity.

Without stringstream functionality, writing a simple program like this would require defining each input integer individually as x or y and then instructing the program to manipulate each one in particular. This can become unwieldy when dealing with a large number of integers in a complex program.

This approach allows the programmer to separate the process of obtaining user input from the process of interpreting that input as data. This way, the user gets the experience he or she expects, while the programmer enjoys greater control over the way the program manipulates and transforms data according to its algorithms.


The Definitive Guide To Free Hadoop Tutorial For Beginners

December 5, 2018 by Krishna Srinivasan

Hadoop. Hive. Spark. HBase. Pig. Flume. The list goes on. Is this a programming course or a children's television show?

While the names of the many technologies that make up the Hadoop ecosystem are certainly inventive, they represent serious value to ambitious programmers who want to handle Big Data projects.

Apache Hadoop is a software framework that encompasses a variety of technologies designed to solve problems in the Big Data environment. Almost every tier-one global corporation in the tech industry uses Hadoop for managing data: Amazon, Alibaba, Facebook, Adobe, LinkedIn, Spotify, Twitter, Yahoo!, and more.

Markets and Markets expects the worldwide market for Hadoop technology to grow to $40 billion by 2021. There has never been a better time to learn how to use this software framework.

But extensive Hadoop courses tend to be expensive. Fortunately, comprehensive tutorials that can introduce new programmers to the Hadoop environment are available for free.


What Do I Need to Start Learning Hadoop?

Generally, the only thing you need to begin learning how to work in Hadoop is a fundamental knowledge of Java or Linux. If you want to get started on either one of those before diving into Hadoop, try these useful resources:

  • edX has free Java courses that will provide you with the foundation you need to understand Hadoop.
  • The Linux Foundation is a great place to start learning how to work with Linux.

If you are familiar with C++ or Python, you also have a good starting point for learning Hadoop. Once you're comfortable with your skills and ready to find out what Hadoop can do for you, any of the following free Hadoop tutorials is a great place to start.


10 Free Hadoop Tutorials for Beginners

Any one of the following free Hadoop tutorials is a great place to start gaining familiarity with the Hadoop environment. Take the opportunity to explore the forefront of Big Data programming using these platforms as your guide.

1. CognitiveClass.AI

CognitiveClass.ai used to call itself Big Data University, and users interested in Hadoop may already be familiar with its previous name. Unlike most free Hadoop tutorials, CognitiveClass.ai offers badges you can add to your portfolio, which function similarly to the certifications that most classes require users to pay for.

CognitiveClass.ai offers more than 50 individual courses on subjects and technologies that make up the Hadoop ecosystem. The best place to begin is the Big Data Fundamentals course. After that, you can select between other beginning courses like Hadoop Fundamentals, Spark Fundamentals, or Data Science Fundamentals.

2. Cloudera Essentials

Cloudera Essentials for Apache Hadoop is an online video course distributed in chapter format. It focuses particularly on the needs of data analysts, administrators, and data scientists. Cloudera also offers courses in SQL analytics using a Hadoop technology called HUE, which segues well into the Hadoop environment by allowing businesses to create their own self-service queries.

Cloudera Essentials works best in combination with Udacity's courses, like the Introduction to Hadoop and MapReduce course that teaches students the fundamental principles behind distributed computing and takes less than one month to complete.

3. Coursera

Coursera is an excellent free Hadoop tutorial service because it relies on courses created in partnership with leading universities. In the case of Hadoop, the University of California, San Diego offers the most comprehensive curriculum, although not all of its courses are free.

While you can use the free courses to learn Hadoop, paying allows you to earn a legitimate diploma, which can be a powerful incentive for anyone who wants to get a job working for a company in the tech industry.

4. edX

Think of edX as a competitor to Coursera – they operate in largely similar ways. Both offer courses from well-known universities, and both have free Hadoop tutorials available from authoritative sources. Both allow users to take courses for free, but charge for official certification upon completion of a course.

One of the ways that edX differentiates itself from Coursera is by offering courses created by high-tech firms and authoritative individual contributors. Essentially, it's not just university professors who are able to teach courses on edX, but also established professionals in the field.

5. Hortonworks

Hortonworks is one of the only entrants on this list that is actually a working tech company supporting enterprise-level organizations' Big Data needs. As such, it has a significant advantage in transforming abstract concepts into real-life, here-and-now terms. The company offers a number of paid certification courses but offers fundamental Hadoop tutorials to users for free.

Also, Hortonworks offers courses in a range of technologies crucial to getting the most out of Hadoop. These include Apache Spark, Hive, and Tez.

  • Spark is a general Hadoop computing engine that supports a broad range of potential applications.
  • Hive is a warehouse infrastructure for data that allows for ad hoc data queries.
  • Tez is a data-flow programming framework generalized for use in many Hadoop applications and even commercial software outside the Hadoop environment.


6. IBM DeveloperWorks

IBM DeveloperWorks offers free courses on Hadoop and Spark distribution in a comprehensive, go-at-your-own-pace way. It describes the function and scope of every component in the Hadoop ecosystem, from well-known elements like MapReduce to specific tools like Sqoop.

  • MapReduce is a mapping framework that orchestrates the collection and communication of data in the Hadoop system.
  • Sqoop is an import/export tool for transferring bulk data from Apache Hadoop databases to other structured data storage systems, like relational databases.

IBM is constantly updating its courses, but it also leaves older courses up – presumably for archival or reference purposes. Be sure to look at the date of the courses you plan on taking to make sure you're taking the latest and most relevant one.

7. Udemy

Udemy offers a wide range of free Hadoop courses and premium content for programmers looking to learn how to work in a Hadoop environment. Udemy is a respected global marketplace for online education, featuring over 80,000 individual courses on a broad variety of subjects.

Most of Udemy's Hadoop courses focus on beginner and intermediate subjects. There is a substantial number of Hadoop starter courses, including courses focused on supplementary technologies like Spark and Hive.

8. Guru99

Guru99 features a single beginner's guide to Hadoop consisting of 14 individual courses to be taken over the course of one week. At an easygoing rate of two courses per day, Guru99 can help students establish fundamental understanding of the Hadoop ecosystem in the shortest amount of time of any entrant on this list.

As is to be expected of a week-long course, Guru99's free Hadoop tutorial is a predictably high-level overview. You are not going to become a Hadoop expert in a week, but you can gain sufficient knowledge to direct your further ambitions towards a specific area of Big Data.

9. CoreServlets

CoreServlets offers a comprehensive self-paced Hadoop training course with slides, source code, and exercises that are completely free and unrestricted. This course assumes the student already has moderate expertise using Java, but CoreServlets also features a Java programming tutorial that caters to people who want to learn Hadoop.

One of CoreServlets' core business activities is customized on-site corporate Hadoop training, so it's clear that the company knows what it's doing when it comes to teaching Hadoop. However, its free tutorial is bound to offer less than the premium service it provides to paying customers. This free Hadoop tutorial acts as a promotional tool for the full experience.

10. Microsoft Virtual Academy

Microsoft Virtual Academy offers a wide variety of Big Data analytics video training. In particular, it focuses on a technology suite called HDInsight, which is Microsoft's proprietary managed Hadoop distribution service configured to run on Microsoft Azure.

While Microsoft's Virtual Academy offers sufficient coursework for free Hadoop tutorials, dedicated students will want to opt for the Microsoft Professional Program that offers legitimate certificates of Hadoop mastery.

Start Learning Hadoop Today

Big Data is important, and it will continue to become increasingly important as time goes on. Knowledge workers with Big Data-related skills such as Hadoop expertise will command higher salaries and better job prospects as the overall amount of data that companies need to analyze increases with time.

The fact is that for large corporations, cloud-based analytics offer opportunities for significant cost reductions, better business efficiency, and greater customer satisfaction in a way that manual processes simply cannot match.

With the increasing adoption of Big Data comes big career opportunities for IT-oriented professionals in all fields. From politics to manufacturing and even the sports industry, Big Data capabilities are becoming increasingly important to deliver personalized experiences to users across the world. Hadoop is the framework that makes this possible.

Start with any of these free Hadoop tutorials to start taking advantage of the career options this in-demand skill offers. Even users with no previous experience using Java or Linux can jump into Hadoop with only cursory introductions to the fundamentals of each.


Mastering How to Use C# Enums – Methods And Examples

November 27, 2018 by Krishna Srinivasan

C# enums are a common data type that creates lists of named constants and their associated values.

Ever thought about creating a list in C# whose members your editor can display in a drop-down menu? Enums make C# code easier to read and work with in exactly this way: you can refer to a list by name and pick out specific elements from it.


Here’s What You Need to Know About Using C# Enums

What is an Enum?

C# enums are formally known as strongly typed constants, which means that they are predefined data types in C#.

The keyword “enum” stands for enumerate, and this data type lives up to its name. As the name suggests, the C# enum keyword is used to declare an enumeration that lists related elements together. It is a useful feature when scripting in C# because grouping related elements together in a list speeds up calling and returning those elements in a program. It also makes C# code easier to read and write with greater precision.

The resulting list that is defined through the enum keyword is often referred to as an enumerator list. The grouping of related elements in an enumerator list makes it easier to call particular elements later in the program’s script.

How to Classify an Enum?

Enum is a value data type in C#, and value data types are generally stored in the stack. It’s used to declare a list of named constants and their integer values in an application.

Every named constant must have an associated numerical value in the enumerator list, even if that value is the positional count of the constant.

What Should an Enumerator List Contain?

Enumerator lists can use many integral data types for their constants – byte, sbyte, short, ushort, int, uint, long, and ulong – but any underlying type other than the default int has to be declared explicitly.

However, whatever data type is chosen, all the constants have to be of the same data type, e.g. byte or short. The enum data type can be applied by using the enum keyword directly inside the namespace, class, or structure.

The enum keyword is used to name the list of constants and their associated integer values, so when the list is called it can return particular constant values. All members of the defined enumerator list are of the enum type. There must be a numeric value for each enum type.
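For example, here is a small sketch of declaring a non-default underlying type (the names are illustrative):

// byte is the underlying type; every constant must fit in a byte.
enum ErrorCode : byte
{
    None = 0,
    NotFound = 1,
    Timeout = 2
};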

When Should You Use Enums?

If a program uses a set of integral numbers, then consider replacing them with enums. Using enums can improve the readability of the program, which allows it to become easier to maintain.

If a programmer applies this logic of replacing sets of integral numbers with enums, then going back to read, find, and call these enumerator lists (and their elements) should be less troublesome. It makes scripting in C# far easier for both experienced and inexperienced programmers.

Enums are useful in creating a list of elements to call on later as a group or individually. They help to provide specificity and scope in selecting elements from the list for various use purposes throughout your C# script.

The best case is to define the enum in the namespace so that all classes in the namespace can access it too. However, it is also valid to add an enum within a class or other structure too. Enums can be defined as global values and local values in the C# script.

How Does It all Work?

Start with typing the keyword enum in the namespace to indicate a named list is soon to follow. Define the list with a name, and then within closed curly braces enter in the constants equated to their associated values.

The elements entered in the list should be separated by commas. Don’t forget to close the list with curly braces instead of square brackets, which are usually used to define list data types in other programming languages.

It is also important to remember to end with a semicolon after the closing curly brace.

The Basic Way to Structure an Enum:

enum <name of enumerated list> { <constant names = values> };

Here’s an Example of How to Use an Enum:

enum WeekDays { Monday = 10, Tuesday = 13, Wednesday = 16 }

What Does this Mean?

To call the value of one of the constant elements in the list, start with calling Console then tacking on the WriteLine method with either the name of the enumerated list or any of the particular elements from the list.

Here’s a Sample of What the Code Looks Like When Calling a Specific Weekday from the Example Enumerator List Above:

Console.WriteLine((int)WeekDays.Monday);

What Does This Mean?

Following computer science conventions, the first element in an enum list counts from 0. The value of each successive element is incremented by 1.

A change in the value of the first constant element in the enum list will reassign the values of the rest of the constant elements to increment accordingly. For example, if the first element in the list is reassigned a value of 10, instead of the default count of 0, then the following element will have a value of 11.
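A small sketch of this reassignment behavior (the names are chosen for illustration):

enum Levels { Low = 10, Medium, High }   // Medium becomes 11, High becomes 12

class Program
{
    static void Main()
    {
        // Casting to int exposes the auto-incremented values.
        System.Console.WriteLine((int)Levels.Medium);   // prints 11
        System.Console.WriteLine((int)Levels.High);     // prints 12
    }
}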

Need More Examples of Enums?

Example 1

enum Breed { Bulldog, Boxer, Chihuahua, Briard };

What Does this Mean?

In the first example the named constants have countable values associated with them starting from 0. So, the enumerated list, Breed, will have its first value, Bulldog, associated with count 0.

Example 2

enum Grades { F = 0, D = 1, C = 2, B = 3, A = 4 };

What Does this Mean?

In this second example, the enum has user-assigned values associated with each of the named constants. So, the enumerated list, Grades, has its first value, F, associated with a user-assigned value of 0.

Enum Methods in C#

The Enum class contains many useful methods for working with enumerations. The beauty of an enum is that you can process it as an integer value and display it as a string.

Some of the most common enum methods that you can invoke include the GetName method, GetNames method, Format method, GetUnderlyingType method, and GetValues method.

The GetName Method

This C# enum method returns, as a string, the name of the constant in the specified enumerated list that has a particular value. It takes two arguments: type and value. The first argument, type, specifies the enumeration type. The second argument, value, is the value of the particular enumerated constant in terms of its underlying type.

The GetNames Method

This C# enum method returns an array of the names of the constants in the specified enumerated list. It takes one argument, type, which is the enumeration's type.

The Format Method

This C# enum method converts a specified value of an enumerated list to its string representation. It takes three arguments: type, object, and format. The first argument, type, is the enum type of the value to convert. The second argument, object, is the value to convert. The third argument, format, is the output format to use (for example, "G" for the constant's name or "D" for its decimal value).

The GetUnderlyingType Method

This C# enum method returns the underlying integral type of the specified enumeration. It takes one argument, type, which is the enum whose underlying type will be retrieved.

The GetValues Method

This C# enum method returns an array of constant values in an enumerated list. It takes one argument, type, which is the enumeration type.


What Else Do I Need to Know About C# Enums?

C# enums are strongly typed constants: an enum of one type may not be implicitly assigned to an enum of another type, even when the underlying values of their members are the same.

Every enum type automatically derives from System.Enum, so we can use System.Enum methods on enums. C# enums are value types, which means their values are typically stored inline (for example, on the stack for local variables) rather than on the heap.

The purpose of the C# enum data type is to make code more readable by grouping related elements under a defined list name. That list can then be referenced later in the script by name, specifying particular constants and their values.


How to Convert Hex to Binary: A Step-by-Step Guide

November 19, 2018 by Krishna Srinivasan


Without even realizing it, you use number conversion all the time. After all, knowing a 5K race means you have to run 3.1 miles is converting from the metric system to US measurement.

Or, if you’re watching an old movie and try to figure out when it was made from the Roman numerals in the copyright statement in the credits, that’s number conversion too.

Unless you’re a programmer, though, you’re probably not familiar with hexadecimal – that is, base 16 – numbering. But, if you do write code, you also know about the binary base-2 system. And, upon occasion, you’re going to need to convert hexadecimal numbers to binary.

So, what exactly are the hexadecimal and binary numbering systems? When are you going to use them? And how do you convert a hexadecimal number to binary? We’ll go over all the pertinent information you need to know below.

The Binary Number System

The numbering system everyone is most familiar with is base 10, also known as decimal or denary. That’s when each number position can have a value from zero to nine.

Most people also recognize the base-2 binary system where every number is represented by a combination of zeros and ones. For example, while the number 1 is represented as 1 in binary, the number 2 is expressed as 10.

Given the overlap in terms of how numbers can be represented, though, it can be confusing to determine which system is being used. To make the distinction clear, a subscript denoting the system can be used after the number: 10₁₀ for decimal and 10₂ for binary (the subscripts "decimal"/"d" and "binary"/"b" are also used).


Binary numbers are used primarily with computers because, at the most basic level, computers only recognize two states: on or off. Each zero or one is a “bit” which is short for “binary digit.” A string of four bits is a nibble, and eight bits are a byte. A byte is also the number of bits needed to represent a character.

For the sake of comparison, here are the numbers 1-10 in both the decimal and binary number systems.

Decimal   Binary
1         1
2         10
3         11
4         100
5         101
6         110
7         111
8         1000
9         1001
10        1010

The Hexadecimal Number System

The hexadecimal system, however, is base 16 as there are sixteen possible values for each digit position. So, you may be asking, how can you have a base-16 system when we only have numerals from zero to nine?

Hexadecimal – or hex, as it's commonly known – does this by using a combination of numbers and letters: 0-9 plus A-F for values from ten to fifteen. Therefore, while the number 1 in both decimal and hex is 1, the number 10 in decimal would be A in hex, and 25 in decimal would be 19 in hex. To make it clear you're using hex, a subscript can be used after the number: 10₁₆ or 10h.

Again, for the sake of comparison, here are the numbers 11-20 in decimal, hex, and binary.

Decimal   Hex   Binary
11        B     1011
12        C     1100
13        D     1101
14        E     1110
15        F     1111
16        10    10000
17        11    10001
18        12    10010
19        13    10011
20        14    10100

Hex is particularly useful in computing because of the data compression it allows versus binary numbers: each hex digit is equivalent to one nibble, or four binary digits. So, while it takes four digits to express 1111₂ as a binary number, you only need one (F₁₆) to do the same in hex.

As numbers increase in size, this becomes even more helpful. For example, it takes four digits each to express 5678₁₀ and its hex equivalent 162E₁₆, but thirteen digits in binary: 1011000101110₂. In addition, converting from hex to binary and vice versa is much quicker – especially when a computer is doing it hundreds or thousands of times per second – than performing the same process with decimal numbers.

Hex is also used in the following computing situations:

  • Displaying error messages: Because memory addresses use hex, this makes resolving error messages easier as they reference these memory locations.
  • Defining colors on web pages: Each primary color on a web page – red, green, and blue – is represented by two hex digits, so a full color value uses six hex digits to specify the amount of each primary color.
  • Representing Media Access Control (MAC) addresses: With these twelve-digit hex numbers, the first six identify the adapter manufacturer while the second six are the adapter’s serial number.

Converting from Hex to Binary

Converting from hex to binary is relatively simple – more so than converting either of these to or from a decimal number – as each hex digit is equivalent to four binary digits as per this table:

Hex   Binary
0     0000
1     0001
2     0010
3     0011
4     0100
5     0101
6     0110
7     0111
8     1000
9     1001
A     1010
B     1011
C     1100
D     1101
E     1110
F     1111

By referring to the table above, converting a hex number to binary only requires replacing each individual hex digit with its binary counterpart. For example, let's convert 162E₁₆ into a binary number using the four steps below.

1. Separate the hex digits

162E₁₆ breaks out into four separate digits: 1, 6, 2, and E.

2. Replace the hex digits with binary equivalent digits

As per the table above, these are the binary representations for each of these hex digits:

Hex   Binary
1     0001
6     0110
2     0010
E     1110

3. Combine the binary digits into a single string

0001 0110 0010 1110 becomes 0001011000101110.

4. Delete any zeros at the beginning of the number

Finally, you need to delete any zeros before the first one in the string. Therefore, 0001011000101110 becomes 1011000101110₂ after deleting the initial three zeros.

Let’s do two more examples to see how this works in action.

Example 1: Convert 62F7 to Binary

  1. 62F7₁₆ breaks out into the individual hex digits 6, 2, F, 7.
  2. 6, 2, F, 7 converts to the binary numbers 0110, 0010, 1111, 0111.
  3. 0110, 0010, 1111, 0111 becomes the single string 0110001011110111.
  4. 0110001011110111 becomes 110001011110111₂ after deleting the zero beginning the string.

Example 2: Convert A24B7 to Binary

  1. A24B7₁₆ breaks out into the individual hex digits A, 2, 4, B, 7.
  2. A, 2, 4, B, 7 converts to the binary numbers 1010, 0010, 0100, 1011, 0111.
  3. 1010, 0010, 0100, 1011, 0111 becomes the single string 10100010010010110111.
  4. 10100010010010110111₂ is the final form of this binary number, as there are no zeros before the first one to delete.
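
Working these steps by hand is fine for a single number, but they also translate directly into code. Here is a minimal Java sketch of the same four-step procedure (the class and method names are just illustrative):

public class HexToBinary {

    // Map each hex digit to four binary digits, then strip the leading zeros.
    static String hexToBinary(String hex) {
        StringBuilder bits = new StringBuilder();
        for (char c : hex.toCharArray()) {
            int digit = Character.digit(c, 16);            // value 0-15 of this hex digit
            String group = Integer.toBinaryString(digit);  // e.g. "1110" for E, "10" for 2
            bits.append("0000".substring(group.length()))  // left-pad the group to four bits
                .append(group);
        }
        // Delete any zeros before the first one, keeping at least one digit.
        return bits.toString().replaceFirst("^0+(?=.)", "");
    }

    public static void main(String[] args) {
        System.out.println(hexToBinary("162E")); // prints 1011000101110
        System.out.println(hexToBinary("62F7")); // prints 110001011110111
    }
}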

Converting from Binary to Hex

On occasion, you'll need to convert a binary number to hex. This is also relatively simple, except that you must account for any leading zeros that were stripped when the binary number was written down. For example, let's take another look at 62F7₁₆ from above, which is expressed as 110001011110111₂ in binary form.

1. Working from right to left, break out the binary number into groups of four digits

110001011110111₂ becomes 110, 0010, 1111, 0111. You must work from right to left to account for any leading zeros that were deleted. With this binary number, for example, the leftmost group only has three digits – 110 – which indicates you need to put a zero back at the beginning of it.

2. Convert individual binary 4-digit groups to hex

0110, 0010, 1111, 0111 converts to 6, 2, F, 7.

3. Combine the hex digits into a single string

6, 2, F, 7 becomes 62F7₁₆.
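
The reverse conversion can be sketched the same way. This hypothetical helper restores the stripped leading zeros and then maps each four-bit group to a hex digit (it would slot into the same class as the sketch above):

    // Left-pad the binary string to a multiple of four bits, then map
    // each four-bit group to its hex digit.
    static String binaryToHex(String binary) {
        StringBuilder padded = new StringBuilder(binary);
        while (padded.length() % 4 != 0) {
            padded.insert(0, '0');  // restore stripped leading zeros
        }
        StringBuilder hex = new StringBuilder();
        for (int i = 0; i < padded.length(); i += 4) {
            int nibble = Integer.parseInt(padded.substring(i, i + 4), 2);
            hex.append(Character.toUpperCase(Character.forDigit(nibble, 16)));
        }
        return hex.toString();
    }

For example, binaryToHex("110001011110111") pads the input to 0110001011110111 and returns 62F7.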

Online Hex to Binary Converters

While it’s useful to know how to convert numbers from hex to binary, this can be a labor-intensive process doing one number after another. Given that, you should consider using one of these online hex-to-binary converters:

  • Code Beautify
  • Browserling
  • BinaryHexConverter


Hexadecimal & Binary Number System Resources

Now that you know the basics of converting hexadecimal numbers to their binary equivalents and vice versa, you may want to know more about how each of these numbering systems work in relationship to each other, especially in terms of computing. To that end, here are some recommended resources to check out:

  • SparkFun: This extensive hexadecimal tutorial covers the basics of using hex as well as converting to and from both binary and decimal numbers and provides a handy conversion calculator.
  • Tutorials Point: This tutorial shows how to add and subtract using hexadecimal numbers.
  • Grinnell College Math Department: Topics covered include binary number theory, negation in the binary system, converting from binary to decimal, and performing addition, subtraction, multiplication, and division using binary numbers.

Remember: Whether you convert from hex to binary by hand or use a converter, understanding how these numbering systems work and their use with computers is key for every programmer.

Want to learn more about using arrays in Java? Check out our guide which covers one- and two-dimensional arrays as well as the methods you should use with them.

 


Python Vs Java: Which Should I Learn? – Interpreted Vs Compiled Languages

November 12, 2018 by Krishna Srinivasan


More than ever before, computer programming skills are in demand. Job sites such as monster.com are flooded with requests for applicants with computer knowledge. Careers in computer programming are lucrative — many pay salaries of $100,000 or higher. So, the temptation for a job seeker to learn a programming language is high.

But what language should an aspiring computer programmer learn? While there are many options available to the beginner, certain languages probably should be avoided. Assembly, for example, is an extremely difficult language to learn, especially as a first programming language, and is probably not appropriate. Other languages simply are not in demand, and also should be avoided. These languages include Visual BASIC and Pascal, among others. While these languages have their adherents, they are not popular and so finding a career only knowing one of these languages will not be as easy as it would be if the student knows a more popular language.

Two programming languages which are popular and can be paths to lucrative careers are Java and Python. Java is a widely used programming language and has long been the primary language of Android app development; the underlying code of most Android apps on your smartphone is Java. In the contest of Python vs. Java, Java certainly can be seen as a legitimate choice.

Quick Navigation
Python: Some Advantages
Interpreted Vs Compiled Languages
Disadvantages of Python
Language Syntax – A Quick Summary

Python, on the other hand, has been used to create such Internet mainstays as BitTorrent, Dropbox, and Cinema 4D. While newer than Java, Python is ferociously popular and has widespread support. When considering Python vs. Java, Python is also a valid choice for the beginning programmer.

However, to choose between these two languages can be complicated. In most surveys, Java (not to be confused with JavaScript!) is rated as the most popular programming language in the world. Meanwhile, Python is usually ranked as one of the top five most popular programming languages in the world. While Python is no slouch in the popularity category, the advantage here goes to Java.

However, there are also other factors to be considered. Python also has a lot of support, and there are probably nearly as many libraries for Python as there are for Java. Python also seems to be becoming more popular as a language, while Java is on a slow decline. Java still will be relevant for years to come, but the “trendy” language right now appears to be Python. So, the decision between Python Vs Java isn't quite as clear as it seems at first glance.

Python: Some Advantages

Why is Python becoming so popular? The first and most obvious reason is that Python is more flexible than Java and related languages (C, C++, C#). Java is a static language, while Python is a dynamic language. A static language has some defining characteristics. According to Stackify.com, variables must be declared before they are used: the coder must take a line of code to "tell" the computer what a variable means before using it in the program. This takes time. Python doesn't require the programmer to complete this step and lets them introduce variables at their convenience.

Also, Java requires the programmer to declare variable types before the program can compile (more on compiling later). So, not only does the programmer have to tell the computer that a variable exists before it can be used, the programmer must tell the computer what type of information the variable holds. This is obviously more rigid than Python, which doesn't require any of that information to create a variable. Python definitely has the edge in flexibility when considering the benefits of Python vs. Java.
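
As a small illustration of that rigidity, here is a hypothetical Java fragment (the class and variable names are arbitrary):

public class TypeDemo {
    public static void main(String[] args) {
        int cash = 100;       // the type must be declared before the variable is used
        String name = "Ada";  // and the declared type is fixed from then on
        // name = 42;         // uncommenting this line is a compile-time error: incompatible types
        System.out.println(name + " has " + cash);
    }
}

The equivalent Python would simply be cash = 100, with no type declaration at all.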

Interpreted Vs Compiled Languages

According to Dzone.com, Java is a language that needs to be compiled, although the compilation process for Java is a little different from that of most other languages. When a program written in Java is finished and needs to be run, the programmer must process it through a compiler, which translates the Java source into Java Virtual Machine bytecode. This bytecode is then compiled or interpreted further, depending on the type of program and/or the performance desired.
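
In practice, those two stages look like this on the command line, assuming a source file named Hello.java:

javac Hello.java   # the compiler produces Hello.class, containing JVM bytecode
java Hello         # the JVM loads the bytecode and interprets or JIT-compiles it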

Meanwhile, Python is an interpreted language. This means that Python doesn't need to go through a compiler and runs without having to be converted to machine language ahead of time. Instead, an interpreter converts Python code to machine instructions on the fly. So, Python skips the time-consuming step of compilation, which is another advantage to consider when weighing Python vs. Java.

Disadvantages of Python

If Python is so much more flexible and easier to work with than Java, why does anyone work with Java at all? Well, unfortunately, all of this speed and flexibility has a downside. First of all, as with most dynamic languages, it is very easy to accidentally create a variable in Python. For example, consider this situation.  A Python programmer has created a variable named “CASH”. This variable is used throughout a 2,000-line program. However, in one instance, the programmer accidentally made a typo and typed in “CAHS” instead of “CASH”. Because the programmer is using Python, they have just created a new variable (and probably a nasty bug as well)!

Because Java is so rigidly defined, it is nearly impossible to create a variable in this manner. The compiler will catch the programmer's mistake and point out which line contains the typo. The program will not run until the programmer fixes the error and re-compiles. This advantage is an excellent reason to consider Java when debating Python vs. Java.

Unlike Java, Python will attempt to interpret any and all code that the programmer types in. Sometimes this produces unpredictable results that are difficult to troubleshoot. In the instance noted above, for example, there technically is no error in Python: the programmer has simply created a new variable named "CAHS", which is legal in that language. The program will run, and the new variable may do nothing, or it may interfere with the legitimate "CASH" variable. Much depends on the program's logic, which makes the bug hard to track down.

Also, the use of an interpreter comes at a cost. While it is undeniably easier for the programmer to simply click "Run" and have the program execute, a program using an interpreter will generally be slower than a program that has already been compiled to machine code. Quite simply, the interpreter cannot convert as quickly on the fly. There are exceptions to this rule, and there are instances in which Python is faster than Java at certain processes. But these instances are rare; generally, Java is quicker than Python when running programs.

One instance in which Python does have a speed advantage over Java is with GPUs (graphics processing units). While most of us think of GPUs as being handy for playing Doom, Python's ecosystem makes it easy to farm work out to the many cores of a GPU so that processes run in tandem. Java isn't as well served in this area, although Java is generally better than Python at utilizing all of the cores of a CPU (central processing unit). Because much of modern programming is still CPU-dependent, this should probably be considered a benefit for Java when thinking about Python vs. Java.

Language Syntax – A Quick Summary

Other differences between the two languages are subtler but worth mentioning. Java encloses its blocks in braces, a familiar syntax if the programmer knows HTML, CSS, or C++. Python does not use braces and instead uses indentation to separate commands. While Java programmers can get away with sloppy brace placement, Python programmers must maintain strict, consistent indentation; otherwise, they run the risk of introducing errors.

However, Python is much easier to work with if the programmer is familiar with English, as opposed to Java, which is a bit more difficult to understand for the beginner. Java also uses a philosophical approach called “object-oriented programming” which should be familiar to those who have worked with C++ or JavaScript but may be challenging for the new programmer. Python, on the other hand, allows the programmer to choose an approach that suits them, whether it be object-oriented programming or a different philosophy, according to the University of The People website.

Python is also well-suited for teaching, as code snippets can be embedded in executable, notebook-style documents alongside pictures and even graphs, and the snippets still run as intended. Java, by contrast, must be compiled before it can run, so source code cannot be mixed into a document, picture, or graph in the same way. This is one area in which Python is the clear winner.


Java and Python are both excellent programming languages that enjoy worldwide support. While both have their advantages and disadvantages, either choice allows the practitioner to enjoy a lucrative and successful career. However, these two languages are very different. Python is a relatively new interpreted language with a different syntax from most established languages, while Java is a more traditional compiled language that will feel familiar to those who have used C++ or JavaScript.

Perhaps the deciding factor between these two languages should be how much previous experience the student programmer has had. If the student has had little or no experience, Python—with its easier syntax and flexible philosophy—may be a better choice. However, if the student has had previous experience with C++ or JavaScript, Java may be the superior choice due to its similarities to those two languages.


How To Master Multithreading in 2 Different Programming Languages

November 7, 2018 by Krishna Srinivasan

For decades, computer advertisements have touted the power of multithreading as the answer to customer demands for more power. The number of cores available to a computer has shot up in the last few years, from four to eight or ten. But what is multithreading, and how can a computer programmer access it? This article will take a hard look at multithreading and examine how it can be used for appreciable performance gains in two of the most popular programming languages available today—Java and Python.

What is multithreading?


Basically, multithreading is an efficient way to run multiple tasks at the same time, often within the same program. These tasks are managed at the smallest level of execution available to the programmer, to ensure that conflicts between them are minimal and that system resources deliver maximum benefit. That unit of execution is called a thread. A thread is an active part of a process, where a process is an action that a program undertakes. So, in a word processing program, for example, saving your file would be considered a process, while allocating the space to save the file would be considered a thread.

This article deals with user-level threads, which are created by a user or by a user-created program, such as an application. Kernel-level threads (and in Java, Daemon threads) are not covered here. Kernel-level threads are specific to a particular operating system and take a long time for a system to implement, while Daemon threads are automatically created by the Java Virtual Machine in order to serve user-level threads. Neither Daemon threads nor Kernel-level threads are typically created by the average coder, so they are not covered here.

Also, please note that multithreading is not the same thing as multitasking. Multitasking happens at the system level and involves memory handling by the OS to ensure that two programs can work at the same time. Multithreading, on the other hand, occurs at the thread level, which is much smaller; while it still involves memory handling to an extent, it is mostly concerned with assigning threads (and thereby processes) to multiple cores. Multithreading can still be used on a single-core machine, but its most significant benefits appear on multicore computers. If each thread in shared memory runs on a different core of a multicore machine, the technique is known as parallel execution, or more simply as parallelism.

If the programmer is running a multithreaded program on a single core machine, multiple threads are sent to the single processor at once. The processor completes the tasks contained in the threads as quickly as it is able while accepting as many threads as it can. According to oracle.com, this technique is known as concurrent execution, or more simply as concurrency.

Multithreading on a Single-Core Processor – Worth It?


While it seems pointless to create a multithreaded program for a single-core processor, this may not be the case. Modern processors are extremely fast, and often bottlenecks are caused by the speed of input/output buses and devices, not the processor itself. If one thread demands utilization of a relatively slow hard drive, for example, other threads can be completed in a multithreaded program while the “first” thread is still accessing the hard drive. In comparison, if traditional programming techniques are used, the entire program must wait until the thread accessing the hard drive has been completed.

There can be a downside to creating multithreaded programs for a single-core processor. If the multithreaded program has too many threads running for the processor to handle at once, the threads could conceivably slow down the processor, which of course could cause performance issues. Another issue to be aware of when coding a multithreaded program for a single-core machine is that the threading management process could take up too much memory, actually slowing down the entire computer. This risk varies depending on the type of program being coded as multithreaded, but it can be significant. Even so, the advantages of making a multithreaded program generally outweigh the risks.

How Complicated is Multithreading in Python?


There is another factor that keeps many programmers from writing multithreaded programs in the language of their preference. Quite simply, multithreaded programs are difficult to write and are not recommended for beginners. Coding them involves memory management techniques that can seem counter-intuitive, depending on the programming language the coder uses.

For example, in Python there are two ways to create threads in a program. The first, the <thread> module, is older and isn't often used in modern versions of Python, but it is common in legacy code and so is still worth studying for a programmer who truly wants to understand multithreading. The <thread> module exposes a purely function-based interface, and in modern versions of Python it has been renamed the <_thread> module.

However, most modern Python coding uses the <threading> module, which looks superficially similar to the <thread> module but actually offers quite a bit more flexibility. The <threading> module can use a function-based approach, as the <thread> module does, but can also take an object-oriented approach to multithreading, which is far easier. So already, the beginning Python thread programmer runs into a situation that can be tricky: namely, two different modules with similar names but approaches that can differ depending on the program.

If the programmer decides to use the <thread> module in order to create a multithreaded application, first a method must be created, according to techbeamers.com. The example that techbeamers provides is “thread.start_new_thread ( function, args[, kwargs] )”, without quotation marks. If the programmer is familiar with methods as used in JavaScript or many other languages, this syntax should look familiar. The method will start a new thread and return its identifier.

When creating a thread with the object-oriented approach, the <threading> module uses a completely different syntax than the <thread> module does. According to techbeamers.com, the programmer's first task is to construct a subclass of the <Thread> class. Then, the coder must override the <__init__(self [,args])> method to supply arguments. Finally, the coder overrides the <run(self [,args])> method to implement the business logic of the thread.

After the threads have been created, the programmer must use some type of memory management to ensure that no conflicts between threads arise. Conflicts between threads (in which two threads compete for the same memory space) can lead to deadlocks: situations in which two or more processes are waiting for the same resources and neither can progress until the other yields. This can result in highly diminished performance and can even cause an application to freeze or crash, depending on the situation.

Fortunately, the Python <threading> module does have functionality to prevent this situation from occurring, notably the Lock() function. After creating a lock with Lock(), the programmer can invoke its acquire(blocking) method, which forces threads to run synchronously. Synchronization is the procedure by which the programmer controls program flow and access to shared data. In other words, synchronization allows the programmer to decide which threads have priority, when threads can access memory, and even to make threads wait until a particular condition has been met. This obviously helps keep conflicts to a minimum.

Unfortunately, the Python <thread> module has no such functionality, although certain programming techniques may assist the veteran coder in working with this method. A tutorial on how exactly to work with multithreading using the Python <thread> module is beyond the scope of this article. As can be seen by the information noted above, coding a multithreaded application in Python should only be attempted by an experienced programmer. But what about other programming languages? Are they any easier to work with?

Multithreading in Java


Let’s use Java as an example. Java is known for its multithreading capabilities and has been around long enough for libraries to be created for nearly every situation a programmer can encounter. According to tutorialspoint.com, to code a Java program that’s capable of running multithreaded processes, the programmer should follow the following steps.

As a first step, the coder needs to implement the run() method of the Runnable interface. This provides the entry point for the thread's logic. The second step is to instantiate a Thread object using the constructor Thread(Runnable threadObj, String threadName). Finally, once the Thread object is created, the programmer may start it by calling its start() method. This is one technique, as detailed on tutorialspoint.com, that allows the Java programmer to create multithreaded programs. There are other ways to create threads in Java, and some are admittedly easier – for example, the programmer may simply extend the Thread class itself, which makes it easy to spin up a number of threads at once.
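
Put together, those three steps look roughly like this minimal sketch (the class and thread names are illustrative):

public class ThreadDemo implements Runnable {

    // Step 1: implement run() from the Runnable interface.
    public void run() {
        System.out.println(Thread.currentThread().getName() + " is running");
    }

    public static void main(String[] args) {
        Runnable threadObj = new ThreadDemo();
        Thread worker = new Thread(threadObj, "worker-1"); // Step 2: Thread(Runnable, String)
        worker.start();                                    // Step 3: start the thread
    }
}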

There is an undeniable benefit to learning how to code multithreaded programs. Programmed correctly, multithreaded programs run tasks more quickly and efficiently than single-threaded programs. However, it cannot be credibly denied that creating multithreaded programs is a challenge most suitable for experienced programmers. Although it may be tempting to start coding multithreaded programs as a novice, the sensible advice is to wait before attempting a project that is so complicated. While the performance gains are substantial, the chance of frustration – and possibly even of freezing your computer – is high. Coding a multithreaded program is a task best left to experienced programmers.


How To Master Polymorphism In Java – Understanding Polymorphism

November 6, 2018 by Krishna Srinivasan

Understanding the concept of polymorphism is key to mastering the Java programming language. However, the concept can be a tricky one. What is the best way to understand the concept of polymorphism and how to apply it to our Java programming?

The Origin of Polymorphism


Perhaps the best way to understand the definition of polymorphism in Java is to examine the simple English definition of the word. Polymorphism is the act of being polymorphous, which is to pass through many or various stages, forms, or the like. So, we know that, in typical English usage, polymorphism is the act of changing—of being able to pass through many different stages or forms, some possibly at the same time. This gives us a clue as to what polymorphism means in Java but doesn’t satisfy our need for a definition completely.

The next step could be to examine the usage of the word in other STEM fields, such as biology. The biological definition is similar to the one above: the existence of an organism in several form or color varieties. In crystallography, polymorphism is defined as crystallization into two or more chemically identical but crystallographically distinct forms, which seems a bit closer to the definition we are looking for in Java. Finally, in genetics, polymorphism is the presence of two or more distinct phenotypes in a population due to the expression of different alleles of a given gene, as in the human blood groups O, A, B, and AB. While this seems far removed from the world of polymorphism in Java and computers, again we find the idea of different "varieties" (in this case, blood types) of the same "object" (in this case, blood).

So, at least in biology and the "real" world, we know that polymorphism has to do with multiple forms of the same object. However, the definition of the word and the application of the concepts behind it are a little different (and trickier) in Java. To understand polymorphism in Java, you must first understand that Java is an object-oriented programming language.

A Quick Summary of Object-Oriented Programming


Unlike some other programming languages, Java is an object-oriented language. This means that all programming in Java consists of creating objects with attributes and assigning behaviors (known as methods) to those objects. Languages such as BASIC and Pascal are considered procedure-oriented: procedure-oriented programming concentrates on written instruction sets to accomplish tasks. The term "subroutines" is commonly associated with this style, and while it still has its adherents, it is generally seen as less modern and consequently less popular.

Java, like many object-oriented programming languages, also allows inheritance, in which parent objects (also known as the superclass of objects) can pass attributes down to their child objects. The programmer can determine the attributes of an object by using an IS-A test. While this test may sound complicated, it actually is quite simple. For example, if the coder creates an animal object as a parent and a cat object as a child, the programmer can simply test the relationship by seeing if the cat object is an animal object. In this case, it would be, and so the relationship is confirmed.

But how does polymorphism in Java relate to all this? It's simple: a Java object is polymorphic if it passes the IS-A test for more than one class. For example, the cat object described above is polymorphic. The cat object IS-A animal object, and it is also a cat object. The cat object falls under two distinct categories and is therefore an example of polymorphism in Java.
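
In Java code, the IS-A test is usually expressed with the instanceof operator. Here is a minimal sketch using hypothetical Animal and Cat classes:

class Animal { }

class Cat extends Animal { }

public class IsADemo {
    public static void main(String[] args) {
        Cat felix = new Cat();
        System.out.println(felix instanceof Cat);    // true: the cat object IS-A cat
        System.out.println(felix instanceof Animal); // true: the cat object IS-A animal, so it is polymorphic
    }
}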

Using Polymorphism in Java


"Wait a second!", some of the more adroit among our readers may claim: that means EVERY object in Java displays polymorphism, because every object passes the IS-A test both for its own type and for the Object class from which all Java classes derive. That is correct, according to tutorialspoint.com. All objects within Java are polymorphic. So if you create an array as an object in the "Cool Array" class, the IS-A test would confirm that the object IS-A array (as its type) and would also confirm that the array IS-A member of the "Cool Array" class.

In practice, this functionality of Java objects can be quite useful. In our Cat example, the coder wouldn’t need to recode all of the Animal attributes. Because of inheritance and polymorphism in Java, the Cat object would simply inherit all of the animal attributes with no further work necessary. Again, the cat object IS-A animal object and it IS-A cat object. But suppose the coder wanted to create a “subspecies” of cat within this framework? What would the coder do then?

Once again, the answer is simple. The coder should simply extend the class of cat to, say, “mountain cat” and then list the attributes that the coder wishes the cat to have. For example, if the coder wishes for the “mountain cat” object to have big whiskers, then they can simply assign it to have such. Even better, because of inheritance and polymorphism, the “mountain cat” class extension will have all of the attributes of both the Animal class and the “Cat class”. The mountain cat class extension IS-A animal class and IS-A cat class.

Overriding in Java – A Brief Look


What happens, though, if the parent class already has an attribute that the coder wishes to change? For instance, if the class "Cat" already lists short whiskers as an attribute, how could the "Mountain Cat" class extension have long whiskers instead? The answer is simple. When a parent class's attribute comes into conflict with a child class's attribute, the child's version overrides the parent's. This procedure is known as overriding, and it is resolved at runtime by the JVM.

As some readers may already be aware, Java is a compiled language, which means that a program written in Java must pass through a compiler that translates the source into bytecode for the Java Virtual Machine. Other languages, such as Python, are interpreted as the program runs. Overriding is resolved AFTER the program is compiled, while it is running, so the procedure is known as dynamic (or late) binding. Overriding isn't the only instance of dynamic binding, but it is one of the most common. All instances of dynamic binding take place at runtime, while the program is running; that is precisely what defines dynamic (or late) binding.

Just a quick reminder: while technically all of Java's non-final methods are virtual (meaning they can be overridden), dynamic (also known as late) binding is what gives Java the behavior of C++'s virtual methods. This functionality is made possible by polymorphism in Java.
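
To make overriding concrete, here is a brief sketch using the whiskers example (the classes are hypothetical):

class Cat {
    String whiskers() { return "short whiskers"; }
}

class MountainCat extends Cat {
    @Override
    String whiskers() { return "long whiskers"; } // the child's version overrides the parent's
}

public class OverrideDemo {
    public static void main(String[] args) {
        Cat pet = new MountainCat();        // declared type Cat, actual type MountainCat
        System.out.println(pet.whiskers()); // prints "long whiskers" - resolved at runtime
    }
}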

Overloading in Java – A Summary


While overriding may be intuitive, the concept of overloading may not come quite so easily to the beginning Java programmer. Overloading takes place when a class contains multiple methods with the same name. The compiler identifies each method by its signature, which is the method's name and parameter list. As long as the parameter lists differ, multiple methods may share a name and the technique succeeds.

Except as noted above, a program using overloading is written and compiled normally. As the program is compiled, the compiler decides which method to use by comparing the arguments in each call to the parameter lists in the method signatures and choosing the match. An example of this technique is below, modified from an example provided by sitepoint.com.

class DemoOverload {

    public int add(int x, int y) {               // method 1
        return x + y;
    }

    public int add(int x, int y, int z) {        // method 2
        return x + y + z;
    }

    public int add(int w, int x, int y, int z) { // method 3
        return w + x + y + z;
    }
}

If the coder were to call the class above with System.out.println(demo.add(2,3)); (where demo is an instance of DemoOverload), the compiler would choose method 1, as the two arguments match its parameter list. If instead we called System.out.println(demo.add(1,2,3,4));, method 3 would be chosen for the same reason.
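
As a quick illustration, a hypothetical driver class shows both calls resolving as described:

public class OverloadDriver {
    public static void main(String[] args) {
        DemoOverload demo = new DemoOverload();
        System.out.println(demo.add(2, 3));       // the compiler picks method 1; prints 5
        System.out.println(demo.add(1, 2, 3, 4)); // the compiler picks method 3; prints 10
    }
}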

Because this choice is made during compilation, the technique is known as compile-time polymorphism, or static binding. Overloading is the most common example of compile-time polymorphism and follows a few rules: the return types may differ between the methods; different exceptions may be thrown by each method, which can be useful; and the methods may have different access modifiers. There are other compile-time polymorphism (static binding) techniques, but overloading is the most common and is useful to know at all skill levels of programming.

Understanding polymorphism is key to success in Java at every level of programming. Many useful techniques, such as overriding and overloading, arise from mastering it; the language would be far less expressive without polymorphism. While other languages also offer polymorphism, it could be argued that Java's implementation is among the most successful: both flexible and easy to understand. Learning polymorphism in Java should be considered invaluable in furthering a coder's Java programming career.


