The Differences between Computer Science, Computer Engineering, Computer Networking, and Information Technology (IT)

People often ask about my vocation and academic background. When I tell them I am a computer engineering graduate, the response is usually, “Huh? Is that computer science?” or sometimes, “Oh, you’re an IT person”… to which I answer, “Err… no, not quite.” Other times people settle on, “Oh, you’re a computer programmer.”
Still, none of these gives a close idea of what I studied, how demanding the degree was, or the broader scope of my expertise.
So today I have looked up the terms and keywords that distinguish the following four areas of study. The lists below suggest each field’s domain and environment. In addition, I have sketched a set diagram to illustrate the differences between the four disciplines.
Computer Engineering
  • Marriage of computer science and electrical engineering 
  • Microprocessors 
  • Embedded computing devices
  • Electronic components computation 
  • Desktop, laptop, super computers
  • Software writing
  • Software compilation and optimization
  • Very Large Scale Integration (VLSI)
  • Operating System (OS) design
  • Integrated circuits 
  • Motors, radio, sensors
  • Chipsets
  • Numerical methods
  • Process instructions and operations 
  • Artificial Intelligence (AI)
  • Computer security 
  • Expert Systems
  • Database Management System (DBMS)
  • Computer architecture 
  • Data communications 
  • Computer networking 
  • Information Technology (IT)
Computer Science 
  • Data processing 
  • Algorithms 
  • Applied Mathematics 
  • Instruction language design
  • Human Intelligence 
  • Human Computer Interaction
  • Theoretical and practical approach to computation
  • Software writing and coding 
  • Artificial Intelligence (AI)
  • Computer security 
  • Expert Systems
  • Database Management System (DBMS)
  • Computer Architecture 
  • Data communications 
  • Computer networking 
  • Information Technology (IT)
Computer Networking 
  • Data networks 
  • Data exchange 
  • Network topology 
  • Personal computer + Phones + Server + Router + Modem
  • Information Technology (IT)
Information Technology 
  • Internet protocols
  • Telecommunication protocols
  • World Wide Web
  • Markup languages 

So you see, IT is just a subset of computer networking, and networking is in turn a subset of both computer science and computer engineering. There are a number of areas where computer science overlaps with computer engineering, including networking and IT. On another note, computer engineering has a bigger domain than computer science because the discipline is a marriage of computer science and electrical engineering.
It also means that a computer engineer can usually do the tasks of a computer scientist, but not the other way around.
Other auxiliary studies such as knowledge representation, the semantic web, Information Retrieval (IR), data mining, cognitive computing, and programming languages are simply extensions of the domains listed above.

4 Types of Memory Leaks in JavaScript and How to Get Rid Of Them

In this article we will explore common types of memory leaks in client-side JavaScript code. We will also learn how to use the Chrome Development Tools to find them. Read on!

Introduction

Memory leaks are a problem every developer has to face eventually. Even when working with memory-managed languages there are cases where memory can be leaked. Leaks are the cause of a whole class of problems: slowdowns, crashes, high latency, and even problems with other applications.

What are memory leaks?

In essence, memory leaks can be defined as memory that is not required by an application anymore that for some reason is not returned to the operating system or the pool of free memory. Programming languages favor different ways of managing memory. These ways may reduce the chance of leaking memory. However, whether a certain piece of memory is unused or not is actually an undecidable problem. In other words, only developers can make it clear whether a piece of memory can be returned to the operating system or not. Certain programming languages provide features that help developers do this. Others expect developers to be completely explicit about when a piece of memory is unused. Wikipedia has good articles on manual and automatic memory management.

Memory management in JavaScript

JavaScript is one of the so called garbage collected languages. Garbage collected languages help developers manage memory by periodically checking which previously allocated pieces of memory can still be "reached" from other parts of the application. In other words, garbage collected languages reduce the problem of managing memory from "what memory is still required?" to "what memory can still be reached from other parts of the application?". The difference is subtle, but important: while only the developer knows whether a piece of allocated memory will be required in the future, unreachable memory can be algorithmically determined and marked for return to the OS.
Non-garbage-collected languages usually employ other techniques to manage memory: explicit management, where the developer explicitly tells the compiler when a piece of memory is not required; and reference counting, in which a use count is associated with every block of memory (when the count reaches zero it is returned to the OS). These techniques come with their own trade-offs (and potential causes for leaks).
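As a rough illustration, reference counting can be modeled with a toy sketch like the following (this is a simplification for exposition, not how any real runtime implements it; the `RefCounted` type and its methods are invented for this example):

```javascript
// Toy reference counter: a block is "freed" the moment its count hits zero.
function RefCounted(payload) {
    this.payload = payload;
    this.count = 0;
    this.freed = false;
}
RefCounted.prototype.retain = function () { this.count++; };
RefCounted.prototype.release = function () {
    if (--this.count === 0) {
        this.freed = true;   // here the memory would be returned to the OS
        this.payload = null;
    }
};

var block = new RefCounted("some data");
block.retain();  // owner #1
block.retain();  // owner #2
block.release(); // owner #1 done: block is still alive
console.log(block.freed); // false
block.release(); // owner #2 done: count is zero, block is freed
console.log(block.freed); // true
```

Note that if two blocks retained each other, their counts would never reach zero: this is exactly the cyclic-reference weakness of reference counting mentioned above.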

Leaks in JavaScript

The main cause for leaks in garbage collected languages are unwanted references. To understand what unwanted references are, first we need to understand how a garbage collector determines whether a piece of memory can be reached or not.

Mark-and-sweep

Most garbage collectors use an algorithm known as mark-and-sweep. The algorithm consists of the following steps:
  1. The garbage collector builds a list of "roots". Roots usually are global variables to which a reference is kept in code. In JavaScript, the "window" object is an example of a global variable that can act as a root. The window object is always present, so the garbage collector can consider it and all of its children to be always present (i.e. not garbage).
  2. All roots are inspected and marked as active (i.e. not garbage). All children are inspected recursively as well. Everything that can be reached from a root is not considered garbage.
  3. All pieces of memory not marked as active can now be considered garbage. The collector can now free that memory and return it to the OS.
Modern garbage collectors improve on this algorithm in different ways, but the essence is the same: reachable pieces of memory are marked as such and the rest is considered garbage.
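The steps above can be sketched with a toy mark phase (a simplification: real collectors traverse the heap, not plain JavaScript objects, and the `mark` function here is invented for illustration):

```javascript
// Toy mark phase: everything reachable from the roots is marked live;
// whatever remains unmarked would be swept in the sweep phase.
function mark(roots) {
    var live = new Set();
    var stack = roots.slice();
    while (stack.length > 0) {
        var obj = stack.pop();
        if (obj === null || typeof obj !== 'object' || live.has(obj)) continue;
        live.add(obj);                     // mark as active (i.e. not garbage)
        for (var key of Object.keys(obj)) {
            stack.push(obj[key]);          // inspect children recursively
        }
    }
    return live;
}

var a = { name: 'a' };
var b = { name: 'b', child: a };
var orphan = { name: 'orphan' };  // not reachable from any root
var live = mark([b]);

console.log(live.has(a));      // true: reachable through b.child
console.log(live.has(orphan)); // false: would be swept
```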
Unwanted references are references to pieces of memory that the developer knows he or she won't be needing anymore but that for some reason are kept inside the tree of an active root. In the context of JavaScript, unwanted references are variables kept somewhere in the code that will not be used anymore and point to a piece of memory that could otherwise be freed. Some would argue these are developer mistakes.
So to understand which are the most common leaks in JavaScript, we need to know in which ways references are commonly forgotten.

The Four Types of Common JavaScript Leaks

1: Accidental global variables

One of the objectives behind JavaScript was to develop a language that looked like Java but was permissive enough to be used by beginners. One of the ways in which JavaScript is permissive is in the way it handles undeclared variables: a reference to an undeclared variable creates a new variable inside the global object. In the case of browsers, the global object is window. In other words:
function foo(arg) {
    bar = "this is a hidden global variable";
}
Is in fact:
function foo(arg) {
    window.bar = "this is an explicit global variable";
}
If bar was supposed to hold a reference to a variable only inside the scope of the foo function and you forget to use var to declare it, an unexpected global variable is created. In this example, leaking a simple string won't do much harm, but it could certainly be worse.
Another way in which an accidental global variable can be created is through this:
function foo() {
    this.variable = "potential accidental global";
}

// Foo called on its own, this points to the global object (window)
// rather than being undefined.
foo();
To prevent these mistakes from happening, add 'use strict'; at the beginning of your JavaScript files. This enables a stricter mode of parsing JavaScript that prevents accidental globals.
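For instance, with strict mode enabled, the first example fails fast instead of silently leaking a global (the variable names here are just illustrative):

```javascript
'use strict';

function foo() {
    // Under strict mode, assigning to an undeclared variable throws
    // instead of creating window.bar.
    bar = "this would have been an accidental global";
}

try {
    foo();
} catch (e) {
    console.log(e.name); // "ReferenceError": the accidental global is never created
}
```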

A note on global variables

Even though we talk about unsuspected globals, it is still the case that much code is littered with explicit global variables. These are by definition noncollectable (unless nulled or reassigned). In particular, global variables used to temporarily store and process big amounts of information are of concern. If you must use a global variable to store lots of data, make sure to null it or reassign it after you are done with it. One common cause of increased memory consumption in connection with globals is caches. Caches store data that is repeatedly used. For this to be efficient, caches must have an upper bound on their size. Caches that grow unbounded can result in high memory consumption because their contents cannot be collected.

2: Forgotten timers or callbacks

The use of setInterval is quite common in JavaScript. Other libraries provide observers and other facilities that take callbacks. Most of these libraries take care of making any references to the callback unreachable after their own instances become unreachable as well. In the case of setInterval, however, code like this is quite common:
var someResource = getData();
setInterval(function() {
    var node = document.getElementById('Node');
    if(node) {
        // Do stuff with node and someResource.
        node.innerHTML = JSON.stringify(someResource);
    }
}, 1000);
This example illustrates what can happen with dangling timers: timers that make reference to nodes or data that is no longer required. The object represented by node may be removed in the future, making the whole block inside the interval handler unnecessary. However, the handler, as the interval is still active, cannot be collected (the interval needs to be stopped for that to happen). If the interval handler cannot be collected, its dependencies cannot be collected either. That means that someResource, which presumably stores sizable data, cannot be collected either.
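A minimal, DOM-free sketch of the fix: stop the interval once its work is done, so the handler and everything it closes over become collectable (the tick threshold and timing below are arbitrary, chosen only to keep the sketch short):

```javascript
var someResource = { data: "presumably sizable data" };

var ticks = 0;
var intervalId = setInterval(function () {
    // While the interval is active, this closure keeps someResource reachable.
    ticks++;
    if (ticks >= 3) {
        clearInterval(intervalId); // releases the handler and its closure
        someResource = null;       // drop our own reference as well
    }
}, 10);
```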
For the case of observers, it is important to make explicit calls to remove them once they are not needed anymore (or the associated object is about to be made unreachable). In the past, this used to be particularly important as certain browsers (Internet Explorer 6) were not able to manage cyclic references well (see below for more info on that). Nowadays, most browsers can and will collect observer handlers once the observed object becomes unreachable, even if the listener is not explicitly removed. It remains good practice, however, to explicitly remove these observers before the object is disposed. For instance:
var element = document.getElementById('button');

function onClick(event) {
    element.innerHTML = 'text';
}

element.addEventListener('click', onClick);
// Do stuff
element.removeEventListener('click', onClick);
element.parentNode.removeChild(element);
// Now when element goes out of scope,
// both element and onClick will be collected even in old browsers that don't
// handle cycles well.

A note about object observers and cyclic references

Observers and cyclic references used to be the bane of JavaScript developers. This was the case due to a bug (or design decision) in Internet Explorer's garbage collector. Old versions of Internet Explorer could not detect cyclic references between DOM nodes and JavaScript code. This is typical of an observer, which usually keeps a reference to the observable (as in the example above). In other words, every time an observer was added to a node in Internet Explorer, it resulted in a leak. This is the reason developers started explicitly removing handlers before nodes or nulling references inside observers. Nowadays, modern browsers (including Internet Explorer and Microsoft Edge) use modern garbage collection algorithms that can detect these cycles and deal with them correctly. In other words, it is not strictly necessary to call removeEventListener before making a node unreachable.
Frameworks and libraries such as jQuery do remove listeners before disposing of a node (when using their specific APIs for that). This is handled internally by the libraries and makes sure that no leaks are produced, even when run under problematic browsers such as the old Internet Explorer.

3: Out of DOM references

Sometimes it may be useful to store DOM nodes inside data structures. Suppose you want to rapidly update the contents of several rows in a table. It may make sense to store a reference to each DOM row in a dictionary or array. When this happens, two references to the same DOM element are kept: one in the DOM tree and the other in the dictionary. If at some point in the future you decide to remove these rows, you need to make both references unreachable.
var elements = {
    button: document.getElementById('button'),
    image: document.getElementById('image'),
    text: document.getElementById('text')
};

function doStuff() {
    elements.image.src = 'http://some.url/image';
    elements.button.click();
    console.log(elements.text.innerHTML);
    // Much more logic
}

function removeButton() {
    // The button is a direct child of body.
    document.body.removeChild(document.getElementById('button'));

    // At this point, we still have a reference to #button in the global
    // elements dictionary. In other words, the button element is still in
    // memory and cannot be collected by the GC.
}
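A DOM-free sketch of the fix (a plain object stands in for the DOM tree here, purely for illustration): both references must be cleared before the element can be collected.

```javascript
// The tree object stands in for the DOM; the dictionary is the second owner.
var tree = { button: { id: 'button' } };
var elements = { button: tree.button };

function removeButton() {
    delete tree.button;     // "removed from the DOM tree"
    elements.button = null; // without this line, the button stays reachable
}

removeButton();
console.log(elements.button); // null: nothing references the button any more
```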
An additional consideration for this has to do with references to inner or leaf nodes inside a DOM tree. Suppose you keep a reference to a specific cell of a table (a <td> tag) in your JavaScript code. At some point in the future you decide to remove the table from the DOM but keep the reference to that cell. Intuitively one may suppose the GC will collect everything but that cell. In practice this won't happen: the cell is a child node of that table and children keep references to their parents. In other words, the reference to the table cell from JavaScript code causes the whole table to stay in memory. Consider this carefully when keeping references to DOM elements.

4: Closures

A key aspect of JavaScript development is closures: anonymous functions that capture variables from parent scopes. Meteor developers found a particular case in which, due to implementation details of the JavaScript runtime, it is possible to leak memory in a subtle way:
var theThing = null;
var replaceThing = function () {
  var originalThing = theThing;
  var unused = function () {
    if (originalThing)
      console.log("hi");
  };
  theThing = {
    longStr: new Array(1000000).join('*'),
    someMethod: function () {
      console.log("message");
    }
  };
};
setInterval(replaceThing, 1000);
This snippet does one thing: every time replaceThing is called, theThing gets a new object which contains a big array and a new closure (someMethod). At the same time, the variable unused holds a closure that has a reference to originalThing (theThing from the previous call to replaceThing). Already somewhat confusing, huh? The important thing is that once a scope is created for closures that are in the same parent scope, that scope is shared. In this case, the scope created for the closure someMethod is shared by unused. unused has a reference to originalThing. Even though unused is never used, someMethod can be used through theThing. And as someMethod shares the closure scope with unused, even though unused is never used, its reference to originalThing forces it to stay active (prevents its collection). When this snippet is run repeatedly a steady increase in memory usage can be observed. This does not get smaller when the GC runs. In essence, a linked list of closures is created (with its root in the form of the theThing variable), and each of these closures' scopes carries an indirect reference to the big array, resulting in a sizable leak.
This is an implementation artifact. A different implementation of closures that can handle this matter is conceivable, as explained in the Meteor blog post.
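The fix suggested in the Meteor post is to clear originalThing at the end of replaceThing, so the shared closure scope no longer links each generation to the previous one (the array is shrunk here to keep the sketch light):

```javascript
var theThing = null;
var replaceThing = function () {
  var originalThing = theThing;
  var unused = function () {
    if (originalThing)
      console.log("hi");
  };
  theThing = {
    longStr: new Array(1000).join('*'),
    someMethod: function () {
      console.log("message");
    }
  };
  originalThing = null; // break the link: the shared scope no longer
                        // references the previous theThing
};
replaceThing();
replaceThing(); // the first theThing is now collectable
```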

Unintuitive behavior of Garbage Collectors

Although garbage collectors are convenient, they come with their own set of trade-offs. One of those trade-offs is nondeterminism. In other words, GCs are unpredictable: it is not usually possible to be certain when a collection will be performed. This means that in some cases more memory than is actually required by the program is being used. In other cases, short pauses may be noticeable in particularly sensitive applications. Although nondeterminism means one cannot be certain when a collection will be performed, most GC implementations share the common pattern of doing collection passes during allocation. If no allocations are performed, most GCs stay at rest. Consider the following scenario:
  1. A sizable set of allocations is performed.
  2. Most of these elements (or all of them) are marked as unreachable (suppose we null a reference pointing to a cache we no longer need).
  3. No further allocations are performed.
In this scenario, most GCs will not run any further collection passes. In other words, even though there are unreachable references available for collection, these are not claimed by the collector. These are not strictly leaks, but still result in higher-than-usual memory usage.
Google provides an excellent example of this behavior in their JavaScript Memory Profiling docs, example #2.

Chrome Memory Profiling Tools Overview

Chrome provides a nice set of tools to profile memory usage of JavaScript code. There are two essential views related to memory: the timeline view and the profiles view.

Timeline view

Google Dev Tools Timeline in action
The timeline view is essential for discovering unusual memory patterns in our code. If we are looking for big leaks, periodic jumps that do not shrink as much as they grew after a collection are a red flag. In this screenshot we can see what a steady growth of leaked objects can look like. Even after the big collection at the end, the total amount of memory used is higher than at the beginning. Node counts are also higher. These are all signs of leaked DOM nodes somewhere in the code.

Profiles view

Google Dev Tools Profiles in action
This is the view you will spend most of your time looking at. The profiles view allows you to take snapshots of the memory use of your JavaScript code and compare them. It also allows you to record allocations over time. In every result view different types of lists are available, but the most relevant ones for our task are the summary list and the comparison list.
The summary view gives us an overview of the different types of objects allocated and their aggregated size: shallow size (the sum of all objects of a specific type) and retained size (the shallow size plus the size of other objects retained due to this object). It also gives us a notion of how far an object is in relation to its GC root (the distance).
The comparison list gives us the same information but allows us to compare different snapshots. This is especially useful for finding leaks.

Example: Finding Leaks Using Chrome

There are essentially two types of leaks: leaks that cause periodic increases in memory use, and leaks that happen once and cause no further increases in memory. For obvious reasons, it is easier to find leaks when they are periodic. These are also the most troublesome: if memory increases over time, leaks of this type will eventually cause the browser to become slow or stop execution of the script. Leaks that are not periodic can easily be found when they are big enough to be noticeable among all other allocations. This is usually not the case, so they usually remain unnoticed. In a way, small leaks that happen once could be considered an optimization issue. However, leaks that are periodic are bugs and must be fixed.
For our example we will use one of the examples in Chrome's docs. The full code is pasted below:
var x = [];

function createSomeNodes() {
    var div,
        i = 100,
        frag = document.createDocumentFragment();
    for (;i > 0; i--) {
        div = document.createElement("div");
        div.appendChild(document.createTextNode(i + " - "+ new Date().toTimeString()));
        frag.appendChild(div);
    }
    document.getElementById("nodes").appendChild(frag);
}
function grow() {
    x.push(new Array(1000000).join('x'));
    createSomeNodes();
    setTimeout(grow,1000);
}
When grow is invoked it will start creating div nodes and appending them to the DOM. It will also allocate a big array and append it to an array referenced by a global variable. This will cause a steady increase in memory that can be found using the tools mentioned above.
Garbage collected languages usually show a pattern of oscillating memory use. This is expected if code is running in a loop performing allocations, which is the usual case. We will be looking for periodic increases in memory that do not fall back to previous levels after a collection.

Find out if memory is periodically increasing

The timeline view is great for this. Open the example in Chrome, open the Dev Tools, go to timeline, select memory and click the record button. Then go to the page and click The Button to start leaking memory. After a while stop the recording and take a look at the results:
Memory leaks in the timeline view
This example will continue leaking memory each second. After stopping the recording, set a breakpoint in the grow function to keep the script from forcing Chrome to close the page.
There are two big signs in this image that show we are leaking memory: the graphs for nodes (green line) and for the JS heap (blue line). Nodes are steadily increasing and never decrease. This is a big warning sign.
The JS heap also shows a steady increase in memory use. This is harder to see due to the effect of the garbage collector. You can see a pattern of initial memory growth, followed by a big decrease, followed by an increase and then a spike, continued by another drop in memory. The key in this case lies in the fact that after each drop in memory use, the size of the heap remains bigger than in the previous drop. In other words, although the garbage collector is succeeding in collecting a lot of memory, some of it is periodically being leaked.
We are now certain we have a leak. Let's find it.

Get two snapshots

To find a leak we will now go to the profiles section of Chrome's Dev Tools. To keep memory use at manageable levels, reload the page before doing this step. We will use the Take Heap Snapshot function.
Reload the page and take a heap snapshot right after it finishes loading. We will use this snapshot as our baseline. After that, hit The Button again, wait a few seconds, and take a second snapshot. After the snapshot is taken, it is advisable to set a breakpoint in the script to stop the leak from using more memory.
Heap Snapshots
There are two ways in which we can take a look at allocations between the two snapshots. Either select Summary and then to the right pick Objects allocated between Snapshot 1 and Snapshot 2, or select Comparison rather than Summary. In both cases we will see a list of objects that were allocated between the two snapshots.
In this case it is quite easy to find the leaks: they are big. Take a look at the Size Delta of the (string) constructor: 8 MB with 58 new objects. This looks suspicious: new objects are allocated but not freed, and 8 MB get consumed.
If we open the list of allocations for the (string) constructor we will notice there are a few big allocations among many small ones. The big ones immediately call our attention. If we select any single one of them we get something interesting in the retainers section below.
Retainers for selected object
We see our selected allocation is part of an array. In turn, the array is referenced by variable x inside the global window object. This gives us a full path from our big object to its noncollectable root (window). We found our potential leak and where it is referenced.
So far so good. But our example was easy: big allocations such as the one in this example are not the norm. Fortunately our example is also leaking DOM nodes, which are smaller. It is easy to find these nodes using the snapshots above, but in bigger sites, things get messier. Recent versions of Chrome provide an additional tool that is best suited for our job: the Record Heap Allocations function.

Recording heap allocations to find leaks

Disable the breakpoint you set before, let the script continue running, and go back to the Profiles section of Chrome's Dev Tools. Now hit Record Heap Allocations. While the tool is running you will notice blue spikes in the graph at the top. These represent allocations. Every second a big allocation is performed by our code. Let it run for a few seconds and then stop it (don't forget to set the breakpoint again to prevent Chrome from eating more memory).
Recorded heap allocations
In this image you can see the killer feature of this tool: selecting a piece of the timeline to see what allocations were performed during that time span. We set the selection to be as close to one of the big spikes as possible. Only three constructors are shown in the list: one of them is related to our big leaks ((string)), the next is related to DOM allocations, and the last one is the Text constructor (the constructor for leaf DOM nodes containing text).
Select one of the HTMLDivElement constructors from the list and then pick Allocation stack.
Selected element in heap allocation results
BAM! We now know where that element was allocated (grow -> createSomeNodes). If we pay close attention to each spike in the graph we will notice that the HTMLDivElement constructor is being called a lot. If we go back to our snapshot comparison view we will notice that this constructor shows many allocations but no deletions. In other words, it is steadily allocating memory without allowing the GC to reclaim any of it. This has all the signs of a leak, plus we know exactly where these objects are being allocated (the createSomeNodes function). Now it's time to go back to the code, study it, and fix the leaks.

Another useful feature

In the heap allocations result view we can select the Allocation view instead of Summary.
Allocations in heap allocations results
This view gives us a list of functions and memory allocations related to them. We can immediately see grow and createSomeNodes standing out. When selecting grow we get a look at the associated object constructors being called by it. We notice (string), HTMLDivElement and Text which by now we already know are the constructors of the objects being leaked.
The combination of these tools can help greatly in finding leaks. Play with them. Do different profiling runs on your production sites (ideally with non-minified, non-obfuscated code). See if you can find leaks or objects that are retained longer than they should be (hint: these are harder to find).
To use this feature go to Dev Tools -> Settings and enable "record heap allocation stack traces". It is necessary to do this before taking the recording.

Conclusion

Memory leaks can and do happen in garbage collected languages such as JavaScript. These can go unnoticed for some time, and eventually they will wreak havoc. For this reason, memory profiling tools are essential for finding memory leaks. Profiling runs should be part of the development cycle, especially for mid- to large-sized applications. Start doing this to give your users the best possible experience. Hack on!

Open source 25-core processor can be stringed into a 200,000-core computer

Researchers want to give a 25-core open-source processor called Piton some serious bite.
The developers of the chip at Princeton University have in mind a 200,000-core computer crammed with 8,000 64-bit Piton chips.
It won’t happen anytime soon, but that’s one possible usage scenario for Piton. The chip is designed to be flexible and quickly scalable, and it will have to ensure that the giant collection of cores stays in sync when processing applications in parallel.
Details about Piton were provided at the Hot Chips conference this week. The goal was to design a chip that could be used in large data centers that handle social networking requests, search and cloud services. The response time in social networking and search is tied to the horsepower of servers in data centers.
Piton is a rare open-source processor; it is based on OpenSparc, a modified version of Oracle's OpenSparc T1 design.
Many open-source CPUs and architectures are already being designed. A notable architecture under development is RISC-V, which is being used by SiFive to design a new processor. Some open-source processor designs are for fun. For example, Open Core Foundation is trying to provide an open-source design for the SH2 processor, which was in Sega’s 1994 Saturn gaming console.
Companies can take the open-source designs, tweak them, and fabricate chips in factories. Alternately, the chip can be simulated by putting the programmable logic on FPGAs (field-programmable gate arrays), which will then mimic the functionality of the multi-core CPU.
It’s interesting that the researchers chose SPARC as the architecture for their design. SPARC is used by Oracle in its high-end servers designed for databases, but the popularity of the architecture is waning. Fujitsu recently said it was dropping SPARC in favor of ARM for servers, specifically for the Post-K supercomputer it will deploy in Japan in 2020.

One Piton chip has 25 cores broken up into five lines, a topology widely referred to as a mesh design. Each core operates at 1GHz. Multiple chips in an array can be daisy-chained into a system through a “bridge” that sits on top of the chip structure. The bridge also links the chip to DRAM and storage.
The mesh design isn’t a new idea as it has been used in chips from companies like Tilera, which is now a part of Mellanox. But what’s unique about Piton is the distributed cache and unidirectional links that would pull all cores together in a large server. The cores also share memory.
Each core has 64KB of L2 cache, totaling 1.6MB for the chip. A mini-router in each core facilitates fast communication with other cores. Each core also has a floating point unit, mostly for large-scale parallel computing.
The core count in CPUs is climbing—especially in server and gaming chips—to provide more computing horsepower. AMD’s upcoming Zen-based chips will have up to 32 cores, while Intel’s latest Xeon E7 server chips have up to 24 cores.
The Princeton researchers claim Piton to be the largest chip in academia. That claim cannot be based on the number of cores in the chip. A 1,000-core chip called KiloCore has been designed by researchers at the VLSI Computing Lab at the University of California, Davis.
But the 460 million transistors could make Piton the largest chip developed in academia by physical size. It is small, though, compared to today's beefier server and gaming chips with billions of transistors. The researchers fabricated Piton using IBM's 32-nanometer process.

How to install Google API for Blogger

Google gives limitless access to people who are interested in sharing their views with the world via its tool called Blogger, formerly known as Blogspot. Using the Google API you can create your own application to post data to Blogger without having to visit the site. Here is how to install the Google API client:


  1. Open Terminal by pressing CTRL + ALT + T
  2. Type "sudo pip install --upgrade google-api-python-client" in the terminal
  3. Press enter
  4. You are all set.
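With the client library installed, a minimal sketch of talking to the Blogger v3 REST API might look like the following. The blog ID and API key are placeholders you would get from the Google Developers Console; the helper below just builds the public v3 endpoint URL.

```python
# Minimal sketch: build the Blogger v3 REST URL for listing a blog's posts.
# blog_id and api_key are placeholders from the Google Developers Console.

def posts_url(blog_id, api_key):
    base = "https://www.googleapis.com/blogger/v3"
    return "{}/blogs/{}/posts?key={}".format(base, blog_id, api_key)

# With the installed client library, fetching the same data is one call:
#   from googleapiclient.discovery import build
#   service = build("blogger", "v3", developerKey=api_key)
#   posts = service.posts().list(blogId=blog_id).execute()

print(posts_url("1234567890", "YOUR_API_KEY"))
```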

Simple Guide to Follow to Learn MySQL

This goes for a friend who asked me to make this for him.

Python is a general-purpose language, open to most kinds of problems. MySQL, on the other hand, is specific to data storage, so working with it well requires not just problem solving but also experience. You can design the same database differently depending on that experience.
So what is MySQL?
MySQL is an open-source SQL database management system. Due to its open-source nature, it is widely used to build other applications that manage databases. (source: www.mysql.com)
The source website is great if you want the MySQL literature. If you want to code in MySQL, there is nothing more specific than first learning the basics:

  1. Learn query creation
  2. Learn DBMS concepts and relational algebra
  3. Use Tutorials Point for an overview/revision
  4. Practice with real-life scenarios
  5. Read the detailed MySQL literature
P.S.: I have not followed this path myself. I followed this book
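To practice query creation before setting up a MySQL server, Python’s built-in sqlite3 module accepts very similar SQL for the basics. The schema below is a made-up example, not from any real project:

```python
import sqlite3

# Made-up practice schema; SQLite's SQL is close enough to MySQL's
# for basic query creation practice.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE students (id INTEGER PRIMARY KEY, name TEXT, gpa REAL)")
cur.executemany("INSERT INTO students (name, gpa) VALUES (?, ?)",
                [("Ali", 3.6), ("Sara", 3.9), ("Omar", 2.8)])

# A simple filtered, ordered query.
cur.execute("SELECT name FROM students WHERE gpa > 3.0 ORDER BY gpa DESC")
honor_roll = [row[0] for row in cur.fetchall()]
print(honor_roll)  # ['Sara', 'Ali']
conn.close()
```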

What is 'sl' and how to install it on ubuntu?

'sl' is a fun tool on Linux that makes fun of you when you mistype 'ls' as 'sl'. When working in an intense environment, where the mind needs a small refreshment every now and then, this tool is there to surprise you with one.
Let's install it and see how fun it is:

  1. Open the terminal by pressing CTRL + ALT + T
  2. Type 'sudo apt-get update' to update the package lists
  3. Type 'sudo apt-get install sl' to install 'sl'
  4. After installation is complete, type 'sl' in the terminal and enjoy :D

Discuss the difference between the InnoDb and MyISAM. Why would you use one over the other when creating a table?



InnoDB and MyISAM are both storage engines for MySQL, each with its pros and cons.
InnoDB:
  1. Gives the ACID properties with transaction support.
  2. Locks are applied at the row level, so a query working on one row will not lock the entire table and block queries working on other rows.
  3. Supports foreign key constraints.
  4. Resistant to table corruption.
  5. Uses a large buffer pool to cache both data and indexes.

MyISAM:
  1. Supports full-text search.
  2. Easy to learn for beginners.
  3. Faster than InnoDB for simple workloads, in part due to its simpler table-level locking.
  4. Best suited to read-intensive workloads.

MyISAM is used when your application doesn't have too many CRUD operations, which in the later stages of application development may become troublesome to manage. Although largely obsolete, MyISAM is still preferred for many applications due to its speed.
On the other hand, InnoDB is more secure and reliable in terms of data protection. Due to row-level locks, all the ACID and CRUD operations can be performed smoothly without worrying about locks on the whole table. Although previous versions of InnoDB did not support full-text search, MySQL 5.6 and above have incorporated this feature, which allows full-text search to be built effectively and creatively into the application.
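The engine is chosen per table at creation time via the ENGINE clause. The DDL below is a sketch with made-up table and column names, shown as Python strings only so the snippet is self-contained:

```python
# The storage engine is chosen per table with the ENGINE clause.
# Table and column names here are made-up examples.
orders_ddl = (
    "CREATE TABLE orders ("
    " id INT PRIMARY KEY,"
    " total DECIMAL(10,2)"
    ") ENGINE=InnoDB"   # transactional: row-level locks, foreign keys
)
articles_ddl = (
    "CREATE TABLE articles ("
    " id INT PRIMARY KEY,"
    " body TEXT,"
    " FULLTEXT (body)"
    ") ENGINE=MyISAM"   # read-heavy: full-text search on older MySQL
)
print(orders_ddl)
print(articles_ddl)
```

Running either statement against a MySQL 5.6+ server would create the table with the named engine; on 5.6+ the FULLTEXT index would also work under InnoDB.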


Original from CoachCrocodile

How to install Pinta on ubuntu 16.04?

Pinta is an image-editing tool, much like Photoshop or GIMP, and is available for most operating systems. This tutorial is for installing Pinta on Linux, mainly Ubuntu 16.04.

Here is how to do it.
  1. First we add the repository that provides Pinta.
  2. Press "CTRL + ALT + T" to open the terminal/Terminator
  3. Type in "sudo add-apt-repository ppa:pinta-maintainers/pinta-stable"
    1. Note: Instead of pinta-stable you can add pinta-daily if you'd like to test the latest and greatest Pinta, but it may be buggy.
  4. Update the system package lists: "sudo apt-get update"
  5. Install Pinta: "sudo apt-get install pinta"

How to install Pycharm with umake on ubuntu 16.04

PyCharm is an amazing IDE for Python development; companies like HP, Symantec, Twitter, Pinterest, Workiva and many others use PyCharm as their Python development IDE. PyCharm runs on Java, so in order to install it you must have Java installed. At the time of writing, I was using PyCharm 2016.2 with Python 2.7.

Although there are many ways to install PyCharm, ranging from bash installation to repository installations, I am going to use the one I believe is most trustworthy: umake, a tool provided by the Ubuntu developers. Umake lets you install various IDEs without much of a problem, and PyCharm installation with umake is so easy that you will quickly forget all the other options.

Here is how to do this.

Follow these steps:
    1. Open the terminal/Terminator by pressing "CTRL + ALT + T"
    2. Type in "sudo add-apt-repository ppa:ubuntu-desktop/ubuntu-make" and press Enter
    3. Provide your password and press Enter
    4. Wait for it to finish
    5. Now type in "sudo apt-get update"
    6. Wait for it to finish
    7. Now type in "sudo apt-get install umake"
    8. Press Y when asked
    9. Wait for the installation to finish
    10. Now type in "umake ide pycharm-community" for PyCharm Community edition, or "umake ide pycharm-professional" for PyCharm Professional edition
    11. Wait for it to download and install
    12. Wait for it...
    13. Here you are: PyCharm is installed
    14. To run PyCharm, click the new icon that has appeared in the launcher on the left, or search for it in your applications.

How to install HTOP - terminal based task manager on Ubuntu 16.04

HTOP is a popular task manager for the Linux command-line interface. It even allows you to stop certain tasks as well as start them, along with many other options.
Before we move on, a suggestion: HTOP looks, feels and works best with Terminator.

Here is how to install HTOP:
  1. Open the terminal/Terminator with "CTRL + ALT + T"
  2. Type "sudo apt-get install htop" in the terminal/Terminator
  3. Provide your password
  4. Follow the instructions
  5. Wait for it...
  6. Here you are... HTOP is installed
  7. To open HTOP, type "htop" on the command line and enjoy its features
Also, HTOP is arguably the best task manager there is for the Linux command-line interface.

How to install Terminator on Ubuntu 16.04

Terminator is an amazing tool for developers on Linux. Today we are going to install Terminator on Ubuntu 16.04.

  1. Open the terminal by pressing "CTRL + ALT + T"
  2. Enter the command "sudo apt-get install terminator"
  3. Wait for it...
  4. Here you are... Terminator is installed.
  5. Close the terminal
  6. Now open Terminator by pressing "CTRL + ALT + T"
 Thank you