1. Retailer Inc. Site Development
- the (only) way forward in Technology
Author : Tarence
Date : Dec 03, 2014
2. Topics
o FEP (Front End Performance)
o Akamai – FEO (Front End Optimization)
o Diff between CDN and Cloud Computing
o Internet of Things (IoT) and Big Data
o DVM/ Recommendations(Promotions)
o Tealeaf
o Tag Management System (TMS) - Ensighten
o Git / SVN
o Adaptive Vs Responsive
o Mobile App Vs Mobile Web
o CMS (Content Management System) - Teamsite/Wordpress/CQ
o MVC in JS - JS framework Concept
o CSS Preprocessors - SASS/LESS
o Task Runner - Grunt
o CCMP – Cross Channel Marketing Platform (Experian)
o Flat file Vs Relational Database
o NoSQL/ Map Reduce / HDFS - Cassandra
o SEO / SPA considerations
o Text Editors - Sublime Text/Brackets / Vim/Atom
3. FEP (Front End Performance)
Adding half a second to a search results
page can decrease traffic and ad revenues
by 20 percent, according to a Google study.
The same article reports Amazon found that
every additional 100 milliseconds of load
time decreased sales by 1 percent. Users
expect pages to load in two seconds—and
after three seconds, up to 40 percent will
simply leave.
5. FEP (Front End Performance)
Rule 1 - Make Fewer HTTP Requests
Rule 2 - Use a Content Delivery Network
Rule 3 - Add an Expires Header
Rule 4 - Gzip Components
Rule 5 - Put Stylesheets at the Top
Rule 6 - Put Scripts at the Bottom
Rule 7 - Avoid CSS Expressions
Rule 8 - Make JavaScript and CSS External
Rule 9 - Reduce DNS Lookups
Rule 10 - Minify JavaScript
Rule 11 - Avoid Redirects
Rule 12 - Remove Duplicate Scripts
Rule 13 - Configure ETags
Rule 14 - Make AJAX Cacheable
GURUS
Steve Souders
Steve is Chief Performance Officer at Fastly, developing web performance services.
He previously served as Google's Head Performance Engineer and as Chief
Performance Yahoo! at Yahoo!. Prior to that Steve worked at General Magic, WhoWhere?,
and Lycos, and co-founded Helix Systems and CoolSync.
Steve is the creator of many performance tools and services including YSlow, the
HTTP Archive, Cuzillion, Jdrop, SpriteMe, ControlJS, and Browserscope.
Addy Osmani
An engineer at Google working with the Chrome team to build tools to help
improve developer productivity and satisfaction.
He’s also written 'Developing Backbone.js Applications' and 'Learning JavaScript
Design Patterns'.
Nicholas C. Zakas
Nicholas C. Zakas is a front-end engineer, author, and speaker. He currently works
at Box making the web application awesome.
Prior to that, he worked at Yahoo! for almost five years, where he was front-end
tech lead for the Yahoo! homepage and a contributor to the YUI library. He is the
author of Maintainable JavaScript (O’Reilly, 2012), Professional JavaScript for Web
Developers (Wrox, 2012), and High Performance JavaScript (O’Reilly, 2010).
6. FEP Dev Tools
Compuware / Dynatrace AJAX Edition:
Helps debug and compare performance down to the level of individual
JS functions, to pinpoint the cause of script delays.
7. FEP Dev Tools
Chrome SPEED TRACER:
This Chrome extension is very useful for getting a second opinion on
what the above tool reports, and it also helps us zero in on the exact
reason for page delays in the browser.
8. FEP Dev Tools
Chrome PROFILER/TIMELINE features in the Developer Toolbar: these built-in DevTools features assist in
checking for memory leaks that lead to ‘bloating’ (e.g. events for observers that were never unbound), and in
tracking DOM reflows when a View or Collection is rendered on a page.
10. FEP - Third Party tools
COMPUWARE Ajax Edition (formerly dynaTrace AJAX Edition): browser-based testing – Last Mile and Backbone testing
GOMEZ Networks: now acquired by Compuware (Last Mile)
KEYNOTE: leading Backbone testing framework
12. Diff between CDN and Cloud Computing
The major difference is that cloud computing means a large group of servers in one data center building, usually
at a single location.
A CDN (Content Delivery Network), on the other hand, is also a group of servers, but distributed around the
country or the world, which gives web visitors better and faster access to the website.
15. Internet of Things (IoT) and Big Data
Decimal value   Metric
1000^1   kB   kilobyte
1000^2   MB   megabyte
1000^3   GB   gigabyte
1000^4   TB   terabyte
1000^5   PB   petabyte
1000^6   EB   exabyte
1000^7   ZB   zettabyte
1000^8   YB   yottabyte
Facebook warehouse stores upwards of 300 PB
of Hive data, with an incoming daily rate of
about 600 TB. [2012]
With 50TB of machine-generated data produced
daily and the need to process 100PB of data all
together, eBay's data challenge is truly
astronomical. [2012]
In 2008, Google was processing 20,000 terabytes
of data (20 petabytes) a day.
Akamai analyzes 75 million events per day to
better target advertisements.
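As a small aside, the decimal units in the table above are easy to compute; this hypothetical helper (not from any library) converts a raw byte count to the largest matching unit:

```javascript
// Walk up the prefix table until the value drops below 1000.
const UNITS = ["B", "kB", "MB", "GB", "TB", "PB", "EB", "ZB", "YB"];

function humanBytes(bytes) {
  let i = 0;
  while (bytes >= 1000 && i < UNITS.length - 1) {
    bytes /= 1000;
    i += 1;
  }
  return `${Number(bytes.toFixed(1))} ${UNITS[i]}`;
}
```

For example, eBay's daily 50 TB would be `humanBytes(50e12)` and Facebook's 300 PB warehouse `humanBytes(300e15)`.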
16. Internet of Things (IoT) and Big Data
The Internet of Things (IoT) is the interconnection of uniquely identifiable embedded computing devices
within the existing Internet infrastructure.
Typically, IoT is expected to offer advanced connectivity of devices, systems, and services that goes beyond
machine-to-machine communications (M2M) and covers a variety of protocols, domains, and applications.
In 2005, Bill Joy, the inventor of Berkeley Unix and a co-founder of Sun Microsystems, described a
taxonomy for the Internet. He called it “the six Webs.”
1. The Near Web: This is the Internet that you see when you lean over a screen - like a laptop.
2. The Here Web. This is the Internet that is always with you because you access it through a
device you always carry - like a cell phone.
3. The Far Web. This is the Internet you see when you sit back from a big screen - like a
television or a kiosk.
4. The Weird Web. This is the Internet you access through your voice and which you listen to -
say when you are in your car, or when you talk to an intelligent system on your phone, or when
you ask your camera a question. Joy concedes that this Web does not yet fully exist.
5. B2B. This is an Internet which does not possess a consumer interface, where business
machines talk to other business machines. It is the chatter of corporations amongst themselves
when they do not care about their human drones.
6. D2D. This is the Internet of sensors deployed in mesh networks, adjusting urban systems
for maximum efficiency. This Web also does not yet exist. Joy says that it will embed machine
intelligence in ordinary, daily life.
19. Big Data
Rupert Tagnipes (dashbay.com) explains it well -
First, it’s important to define what Big Data is.
I use “Big Data” to mean the data itself – although the term is often used interchangeably with the solutions
(such as Hadoop).
I believe that data should satisfy 3 criteria before being considered “Big Data”:
Volume – the amount of data has to be large, in petabytes not just gigabytes
Velocity – the data has to be frequent, daily or even real-time
Structure – the data is typically but not always unstructured (like videos, tweets, chats)
20. Big Data
Big data is an all-encompassing term for any collection of data sets so large and complex that it becomes
difficult to process using traditional data processing applications.
The challenges include analysis, capture, curation, search, sharing, storage, transfer, visualization, and
privacy violations. The trend to larger data sets is due to the additional information derivable from analysis
of a single large set of related data, as compared to separate smaller sets with the same total amount of
data, allowing correlations to be found to "spot business trends, prevent diseases, combat crime and so on."
Big data can also be defined as "a large volume of unstructured data which cannot be handled by
standard database management systems like DBMS, RDBMS or ORDBMS".
Cloudera Inc. is an American-based software company that provides Apache Hadoop-
based software, support and services, and training to business customers.
Apache Hadoop is an open-source software framework for distributed storage and
distributed processing of Big Data on clusters of commodity hardware.
Its Hadoop Distributed File System (HDFS) splits files into large blocks (default 64MB or
128MB) and distributes the blocks amongst the nodes in the cluster.
SAP’s Big Data services and consulting experts can help you transform your IT
infrastructure and implement Big Data technologies that let you capture, store, and
leverage data-driven insights in real time.
22. Tealeaf
For example, Tealeaf can identify what campaigns or interactions triggered a customer session to end
prematurely and result in a non-conversion. Another example of Tealeaf’s technology in action is an online
travel agency finding that when visitors misspell a vacation package name and receive zero search results,
nearly 100 percent of the visitors leave the site without completing a booking.
Tealeaf’s software captures and records what each customer is doing and seeing
in real-time on every page and across all site visits, down to the page-by-page,
browser-level experience.
By capturing every single customer’s visit, as well as the reaction of the site in
response to the customer’s requests, Tealeaf captures both the quantitative and
qualitative details of every single interaction. This data is then used towards
optimizing the customer experience.
28. Git / SVN
1. Git is distributed, SVN is not:
Git, like SVN, can have a centralized repository or server. But Git is intended to be used in distributed mode,
which means every developer checking out code from the central repository/server has their own cloned
repository installed on their machine. Say you’re stuck somewhere without network connectivity, like on a
flight or in an elevator: you’ll still be able to commit files, create branches, etc.
2. Git stores content as metadata, SVN stores just files:
Most source control systems store file metadata in hidden folders like .svn, .cvs, etc., whereas Git stores the
entire content inside the .git folder. The .git folder is the cloned repository on your machine; it has everything
the central repository has, like tags, branches, version histories, etc.
3. Git branches are not the same as SVN branches:
Branches in SVN are nothing but another folder in the repository, so the chance of accumulating orphan
branches is pretty high. With Git, by contrast, you can quickly switch between branches from the same working
directory, and it helps you find un-merged branches and merge files fairly easily and quickly.
4. Git does not have a global revision number like SVN does:
We can use Git’s SHA-1 hash to uniquely identify a code snapshot. It may not exactly replace SVN’s easily
readable numeric revision number, but it serves much the same purpose.
5. Git’s content integrity is better than SVN’s:
Git content is cryptographically hashed using the SHA-1 algorithm. This ensures the robustness of code
contents, making the repository less prone to corruption due to disk failures, network issues, etc.
33. MVC in JS - JS framework Concept
MVC (Model-View-Controller) is an architectural design pattern that encourages improved application
organization through a separation of concerns.
It enforces the isolation of business data (Models) from user interfaces (Views), with a third component
(Controllers) traditionally managing logic, user input and the coordination of models and views.
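A minimal sketch of that separation in plain JavaScript (class names and the greeting are illustrative, not from any framework):

```javascript
// Model: holds business data and notifies observers on change.
class Model {
  constructor(data) { this.data = data; this.listeners = []; }
  onChange(fn) { this.listeners.push(fn); }
  set(key, value) {
    this.data[key] = value;
    this.listeners.forEach((fn) => fn(this.data));
  }
}

// View: renders the model; here it just builds a string.
class View {
  constructor() { this.rendered = ""; }
  render(data) { this.rendered = `Hello, ${data.name}!`; }
}

// Controller: wires user input to the model, and the model to the view.
class Controller {
  constructor(model, view) {
    this.model = model;
    model.onChange((data) => view.render(data));
  }
  handleInput(name) { this.model.set("name", name); }
}

const view = new View();
const controller = new Controller(new Model({ name: "world" }), view);
controller.handleInput("Retailer Inc.");
```

The view never touches business data directly; it only reacts to model changes, which is the separation of concerns the pattern is after.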
34. MVC in JS - Comparison?
http://www.slideshare.net/deepusnath/javascript-frameworks-comparison-angular-knockout-ember-and-backbone
Comparison between Angular/Knockout/Ember/Backbone on
Speed
Dependencies
Databinding
Routing
Etc.
38. MVC in JS - 2 way Binding
Two-way binding means that:
1. When properties in the model get updated, so does
the UI (view).
2. When UI elements get updated, the changes get
propagated back to the model.
Backbone.js doesn't have a "baked-in"
implementation of #2 (although you can certainly do
it using event listeners).
Ember.js, AngularJS and KnockoutJS all have two-way
binding abilities.
http://projects.mariusgundersen.net/JSconf2013/#/overview
Which one is fastest/slowest?
• Ember is slowest when rendering lists
• Angular is slowest when the model is complex
• Knockout is slowest when pushing many items
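Directions #1 and #2 above can be sketched with a getter/setter pair, the same basic trick Knockout's observables rely on. `fakeInput` is a stand-in for a DOM element so the sketch stays framework-free:

```javascript
// A getter/setter-based two-way binding sketch.
function bind(model, key, input) {
  let value = model[key];
  Object.defineProperty(model, key, {
    get: () => value,
    set: (v) => { value = v; input.value = v; }    // direction 1: model -> UI
  });
  input.onchange = () => { value = input.value; }; // direction 2: UI -> model
  input.value = value;
}

const model = { name: "initial" };
const fakeInput = { value: "", onchange: null };
bind(model, "name", fakeInput);

model.name = "from model";                 // a model change updates the "UI"
const uiAfterModelChange = fakeInput.value;

fakeInput.value = "from user";             // simulate a user edit...
fakeInput.onchange();                      // ...which flows back to the model
```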
39. MVC in JS - PushState
PushState / PopState

function addClicker(link) {
  link.addEventListener("click", function(e) {
    swapPhoto(link.href);                     // update the content
    history.pushState(null, null, link.href); // update the URL without a reload
    e.preventDefault();
  }, false);
}

Not supported in IE < 10. Use a polyfill for fallback.
41. MVC in JS - Logic less/Embedded JS Templates
Embedded JavaScript Templates
These templating options allow you to embed
regular JavaScript code directly within the
template, an approach similar to ERB (Embedded Ruby).
• underscore.js
• Jade
• haml-js
• jQote2
• doT
• Stencil
• Parrot
• Eco
• EJS
• jQuery templates
• node-asyncEJS
Logic-less Templates
This group of templates follows the philosophy
that there should be little to no logic in your
templates. They do not allow arbitrary
JavaScript code in the template. Instead, you
must use the small set of constructs offered by
the templating language itself, which,
depending on the language, may include basic
loops, conditionals, and partials.
• mustache
• dust.js
• handlebars
• Google Closure Templates
• Nun
• Mu
• kite
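The distinction can be shown in miniature: a toy mustache-style renderer that only substitutes `{{name}}` placeholders, next to an "embedded" template that is plain JavaScript. Both are illustrative sketches, not the real libraries:

```javascript
// Logic-less style: only {{key}} substitution is allowed, no arbitrary JS.
function renderLogicless(template, data) {
  return template.replace(/\{\{(\w+)\}\}/g, (_, key) =>
    key in data ? String(data[key]) : ""
  );
}

// Embedded style: the template itself is JavaScript (as in EJS/underscore),
// so any expression, like .toUpperCase(), is fair game.
const embedded = (data) => `Hi ${data.name.toUpperCase()}!`;

const out1 = renderLogicless("Hi {{name}}!", { name: "Tarence" });
const out2 = embedded({ name: "Tarence" });
```

The trade-off: logic-less templates keep presentation dumb and portable; embedded templates are more powerful but invite business logic to leak into the view.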
44. MVC in JS - Full JavaScript Application Stack
MongoDB is a NoSQL document-based database that uses JavaScript as its query language (but is not
written in JavaScript), thus completing our end-to-end JavaScript platform. But that’s not even the main
reason to choose this database.
Node.js is a platform for building fast and scalable network applications — that’s pretty much what the
Node.js website says. But Node.js is more than that: It’s the hottest JavaScript runtime environment around
right now, used by a ton of applications and libraries — even browser libraries are now running on Node.js.
RequireJS doesn’t just load modules with the AMD API; it also lets you define dependencies and hierarchies
between your modules and have the RequireJS library load them for you. It also provides an easy way to avoid
polluting the global variable space, by defining all of your modules inside functions.
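To make the AMD idea concrete, here is a stripped-down toy `define`/`require` pair (real RequireJS also handles async script loading, paths configuration and much more; none of that is modeled here, and the module names are made up):

```javascript
// A toy AMD registry: define(name, deps, factory) registers a module;
// requireModule(name) resolves its dependency tree and caches the result.
const registry = {};

function define(name, deps, factory) {
  registry[name] = { deps, factory, exports: undefined };
}

function requireModule(name) {
  const mod = registry[name];
  if (mod.exports === undefined) {
    // Resolve dependencies first, then run the factory with them injected.
    mod.exports = mod.factory(...mod.deps.map(requireModule));
  }
  return mod.exports;
}

// Modules declare their dependencies instead of reading globals.
define("math", [], () => ({ add: (a, b) => a + b }));
define("calc", ["math"], (math) => ({ double: (x) => math.add(x, x) }));

const calc = requireModule("calc");
```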
Grunt enables us to automate build tasks, including simple copying and concatenation of files, template
precompilation, style language (i.e. Sass and LESS) compilation, unit testing (with Mocha), linting and code
minification (for example, with UglifyJS or Closure Compiler).
MVC JS Frameworks/Templates like Angular, Backbone, Ember, Handlebar, Mustache, Underscore etc.
Code coverage is a metric for evaluating your testing. As the name implies, it tells you how much of your
code is covered by your current test suite. CoverJS measures your tests’ code coverage by instrumenting
statements (instead of lines of code like JSCoverage) in your code and generating an instrumented version of
your code. It can also generate reports to feed your Continuous Integration Server.
46. CSS Preprocessor : Why use one!
1. It adds stuff that should’ve been in CSS in the first place.
With it, you can start using things like variables, mixins, and
functions. It will allow you to start reusing properties and
patterns over and over, after defining them just once.
2. It will make your CSS DRY (Don’t Repeat Yourself!)
.large-heading {
  font-family: Helvetica, Arial;
  font-weight: bold;
  font-size: 24px;
  text-transform: uppercase;
  line-height: 1.2em;
  color: #ccc;
}
.med-heading {
  .large-heading;
  font-size: 18px;
}
.small-heading {
  .large-heading;
  font-size: 14px;
}
3. It will make your CSS more organized.
h1 {
  font-family: Arial, Helvetica, sans-serif;
  line-height: 1.2em;
  a {
    color: black;
    &:hover {
      text-decoration: none;
    }
  }
}
4. Makes the code easier to maintain.
Being able to use variables, mixins, and functions means you
can define a value or group of values once at the beginning of
your document, instead of throughout it, making it easier to
make changes later.
@maincolor: #4575D4;
@accentcolor: #FFA700;

a { color: @maincolor; }
a:hover { color: lighten(@maincolor, 20%); }
.primary-nav { background-color: @accentcolor; }
5. It's easy to set up.
6. It will make your websites prettier.
7. It's easier to write than you think.
8. It will save you time.
9. It will make your code easier to maintain.
10. Frameworks that supercharge your CSS.
Frameworks built on top of CSS preprocessing languages do even
more heavy lifting.
For example, Compass, a framework built on top of Sass,
automatically generates all those annoying vendor-specific CSS3
properties, and has lots of useful functions for generating grids, sticky
footers, and more. It can even generate a sprite sheet for you out
of separate images.
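What a preprocessor's variable pass boils down to can be sketched in a few lines of JavaScript (a toy illustration only; LESS and Sass of course do far more, such as nesting, mixins and functions):

```javascript
// Toy "variable pass": lines like "@name: value;" define variables, and
// later occurrences of @name are expanded before the CSS ships.
function expandVariables(source) {
  const vars = {};
  const body = [];
  for (const line of source.split("\n")) {
    const def = line.match(/^@([\w-]+):\s*(.+);$/);
    if (def) {
      vars[def[1]] = def[2]; // remember the definition
    } else {
      body.push(line.replace(/@([\w-]+)/g, (m, name) => vars[name] ?? m));
    }
  }
  return body.join("\n").trim();
}

const css = expandVariables("@maincolor: #4575D4;\na { color: @maincolor; }");
```

The browser only ever sees the expanded output; the variables exist purely at compile time, which is also why there is no live editing without recompiling.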
47. CSS Preprocessor (what’s possible!)
A mixin lets one rule set copy all the properties of another. To sum up, a mixin is nothing more than an
advanced copy and paste. All of the famous preprocessors have some kind of mixin.
Logic Statements
.lightswitch(@colour) when (lightness(@colour) > 40%) {
color: @colour;
background-color: #000;
.box-shadow(0 3px 4px #ddd);
}
.lightswitch(@colour) when (lightness(@colour) < 41%) {
color: @colour;
background-color: #fff;
.box-shadow(0 1px 1px rgba(0,0,0,0.3));
}
Loops
.looper (@i) when (@i > 0) {
.image-class-@{i} {
background: url("../img/@{i}.png") no-repeat;
}
.looper(@i - 1);
}
.looper(3);
//--------------- Output: --------------------
//.image-class-3 {
// background: url("../img/3.png") no-repeat;
//}
//.image-class-2 {
// background: url("../img/2.png") no-repeat;
//}
//.image-class-1 {
// background: url("../img/1.png") no-repeat;
//}
48. CSS Preprocessor (what can go wrong…beware!)
Your entire team has to be on board with the particular variation you choose.
Because it doesn’t work without compiling, there’s no ability to live edit a site for quick changes.
And preprocessors, in the wrong hands, can make for some very ugly CSS. Because of the nested nature of
preprocessed code, it’s exceptionally easy to wind up with a CSS file with thousands of lines and properties
that have been nested 5 elements deep when just 2 would do.
COMPASS
Compass is a framework for your CSS that resolves some of the problems with the language. It also includes a
few tools to make development faster and easier:
• it provides a whole range of helper functions for images, colors, typography and more;
• it supports mathematical calculations;
• it helps ensure cross-browser compatibility.
• Cross browser CSS3 mixins (stop trying to remember all those CSS3 browser variations)
• Common typography patterns
• Common styling patterns
• Built in spriting capabilities
• Blueprint module
50. Grunt
WHAT IS GRUNT?
Built on top of Node.js, Grunt is a task-based command-line tool that speeds up workflows by reducing the effort
required to prepare assets for production. It does this by wrapping jobs up into tasks that run automatically as
you go along. Basically, you can use Grunt for most tasks that you consider to be grunt work and would normally
have to configure and run manually yourself.
E.g., concatenating files, running JSHint on our code, running tests, or minifying scripts.
55. NoSQL – HDFS (Hadoop Distributed File System)
Hadoop Distributed File System (HDFS) is a Java-based file system that provides scalable and reliable
data storage that is designed to span large clusters of commodity servers.
HDFS, MapReduce, and YARN form the core of Apache™ Hadoop.
56. NoSQL – Map Reduce (Technique)
A traditional database product would prefer more predictable, structured data. A relational database may
require vertical, and sometimes horizontal, expansion of servers to keep up as data or processing
requirements grow.
An alternative, more cloud-friendly approach is to employ NoSQL. The load can easily grow by distributing
itself over lots of ordinary, cheap, Intel-based servers. A NoSQL database is exactly the type of database
that can handle the sort of unstructured, messy and unpredictable data that our systems of engagement
require.
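The MapReduce technique named in the slide title can be shown in miniature: a map step emits (key, value) pairs, a shuffle groups them by key, and a reduce step folds each group. Hadoop distributes these phases across a cluster; this single-process JavaScript sketch only shows the shape:

```javascript
// mapReduce: map each input to (key, value) pairs, shuffle by key, reduce.
function mapReduce(inputs, mapFn, reduceFn) {
  const pairs = inputs.flatMap(mapFn);              // map phase
  const groups = {};
  for (const [key, value] of pairs) {               // shuffle phase
    (groups[key] = groups[key] || []).push(value);
  }
  return Object.fromEntries(                        // reduce phase
    Object.entries(groups).map(([k, vs]) => [k, reduceFn(k, vs)])
  );
}

// Classic word count over two input "records".
const counts = mapReduce(
  ["big data", "big clusters"],
  (line) => line.split(" ").map((w) => [w, 1]),
  (_word, ones) => ones.reduce((a, b) => a + b, 0)
);
```

Because each map call and each reduce group is independent, the work parallelizes naturally, which is what lets HDFS clusters chew through petabytes.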
58. NoSQL – Cassandra (Why is it preferred)
Workload diversity – Big Data comes in all shapes, colors and sizes. Rigid schemas have no place here; instead you need a
more flexible design. You want your technology to fit your data, not the other way around. And you want to be able to do
more with all of that data: perform transactions in real time, run analytics just as fast, and find anything you want in an
instant in oceans of data, no matter what form that data may take.
Scalability – With big data you want to be able to scale very rapidly and elastically, whenever and wherever you want,
whether across multiple data centers or out to the cloud if needed.
Performance – As has already been discussed, in an online world where millisecond delays can cost you sales, Big Data
must move at extremely high velocity no matter how much you scale or what workloads your database must
perform. Performance of your environment, namely your applications, should be high on the list of requirements for
deploying a NoSQL platform.
Continuous Availability - Building off of the performance consideration, when you rely on big data to feed your essential,
revenue-generating 24/7 business applications, even high availability is not high enough. Your data can never go down,
therefore there should be no single point of failure in your NoSQL environment, thus ensuring applications are always
available.
Manageability - Operational complexity of a NoSQL platform should be kept at a minimum. Make sure that the
administration and development required to both maintain and maximize the benefits of moving to a NoSQL environment
are achievable.
Cost – This is certainly a compelling reason for making the move to a NoSQL platform, as meeting even one of the
considerations presented here with relational database technology can become prohibitively expensive. Deploying
NoSQL properly allows for all of the benefits above while also lowering operational costs.
Strong Community - This is perhaps one of the more important factors to keep in mind as you move to a NoSQL platform.
Make sure there is a solid and capable community around the technology, as this will provide an invaluable resource for
the individuals and teams that will be managing the environment. Involvement on the part of the vendor should not
only include strong support and technical resource availability, but also consistent outreach to the user base.
Good local user groups and meetups will provide many opportunities for communicating with other individuals
and teams that will provide great insight into how to work best with the platform of choice.
59. Flat File VS Relational Database
Flat File Database
A flat file database is a database designed around a single table. The flat file design puts all database
information in one table, or list, with fields to represent all parameters.
A flat file may contain many fields, often with duplicate data, which makes it prone to corruption.
If you decide to merge data between two flat files, you need to copy and paste relevant information from one
file to the other.
There is no automation between flat files. If you have two or more flat files that contain client addresses, for
example, and a client moved, you would have to manually modify the address parameters in each file that
contains that client’s information. Changing information in one file has no bearing on other files.
Flat files offer the functionality to store information, manipulate fields, print or display formatted information
and exchange information with others, through email and over the Internet.
Relational Database
A relational database, on the other hand, incorporates multiple tables with methods for the tables to work
together. The relationships between table data can be collated, merged and displayed in database forms. Most
relational databases offer functionality to share data:
• Across networks
• Over the Internet
• With laptops and other electronic devices, such as palm pilots
• With other software systems
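The update problem described above can be shown in data-structure form (a toy sketch; the client and address values are made up):

```javascript
// Flat-file layout: the client's address is repeated on every order row.
const flatRows = [
  { order: 101, client: "Acme", address: "12 Old Rd" },
  { order: 102, client: "Acme", address: "12 Old Rd" } // duplicate data
];

// Relational layout: the address lives in one place, keyed by client.
const clients = { Acme: { address: "12 Old Rd" } };
const orders = [
  { order: 101, client: "Acme" },
  { order: 102, client: "Acme" }
];

// The client moves: the flat file needs every row touched...
flatRows.forEach((row) => {
  if (row.client === "Acme") row.address = "9 New St";
});
// ...while the relational layout needs exactly one update.
clients.Acme.address = "9 New St";

// A "join" reads each order together with the current address.
const joined = orders.map((o) => ({ ...o, address: clients[o.client].address }));
```

Miss one row in the flat file and the data silently disagrees with itself; the relational layout makes that inconsistency impossible by construction.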
66. SEO (Search Engine Optimization)
Black Hat SEO vs. White Hat SEO
Meaning: Black hat – techniques used to get higher search rankings in an unethical manner. White hat –
conforms to search engine guidelines and involves no deception.
Status: Black hat – disapproved by search engines. White hat – approved by search engines.
Result: Black hat – the site is eventually banned, de-indexed or penalized through lower rankings. White hat –
results last a long time.
Techniques: Black hat includes keyword stuffing, doorway and cloaked pages, link farming, hidden text and
links, and blog comment spam. White hat includes research, analysis, rewriting meta tags to be more relevant,
content improvement and web redesign.
68. SEO in SPAs (Single Page Applications) - UAs
BROWSER USER AGENTS
Chrome 41.0.2228.0
Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/41.0.2228.0 Safari/537.36
Firefox 33.0
Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10; rv:33.0) Gecko/20100101 Firefox/33.0
Internet Explorer 11.0
Mozilla/5.0 (compatible, MSIE 11, Windows NT 6.3; Trident/7.0; rv:11.0) like Gecko
Safari 6.0
Mozilla/5.0 (iPad; CPU OS 6_0 like Mac OS X) AppleWebKit/536.26 (KHTML, like Gecko) Version/6.0
Mobile/10A5355d Safari/8536.25
BOT USER AGENTS
Googlebot 2.1
Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)
Googlebot/2.1 (+http://www.googlebot.com/bot.html)
Googlebot/2.1 (+http://www.google.com/bot.html)
Yahoo! Slurp
Mozilla/5.0 (compatible; Yahoo! Slurp; http://help.yahoo.com/help/us/ysearch/slurp)
Bingbot 2.0
Mozilla/5.0 (compatible; bingbot/2.0; +http://www.bing.com/bingbot.htm)
Mozilla/5.0 (compatible; bingbot/2.0 +http://www.bing.com/bingbot.htm)
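One common reason an SPA cares about these strings is deciding when to serve a prerendered snapshot to a crawler. A hedged sketch (the pattern covers only the bots listed above and is in no way an exhaustive or production-grade bot detector):

```javascript
// Match the User-Agent against known crawler tokens.
const BOT_PATTERN = /googlebot|bingbot|yahoo! slurp/i;

function isSearchBot(userAgent) {
  return BOT_PATTERN.test(userAgent);
}

const googlebot =
  "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)";
const chrome =
  "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) " +
  "Chrome/41.0.2228.0 Safari/537.36";
```

Note that modern crawlers execute JavaScript to varying degrees, so UA sniffing is at best a fallback, not a complete SEO strategy for SPAs.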