24. OSI MODEL PROTOCOLS
HTTP, FTP, DNS, SNMP
SSL, SSH, MPEG, JPEG
NFS, RPC, NetBIOS
TCP, UDP
IPv4, IPv6, IPX, AppleTalk
PPP, ATM, CDP, Frame Relay
Coax, 802.11, ISDN, DSL
27. INTERNET PROTOCOL
• Unreliable: no attempt to recover lost, duplicated, or out-of-order packets.
• Connection-less: each packet is handled independently.
• Best Effort: no guarantees on delivery.
A set of rules for addressing and routing packets of data over a network.
28. IPV4 ADDRESS
An IPv4 address is a 32-bit address that uniquely and universally defines the connection of a device to the Internet.
The address space of IPv4 is 2^32 (4,294,967,296 addresses).
The address space of IPv6 is 2^128 (340 billion billion billion billion addresses… not a typo).
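The difference in scale is easy to verify with plain integer arithmetic:

```python
# Compare the IPv4 and IPv6 address spaces directly.
ipv4_space = 2 ** 32
ipv6_space = 2 ** 128

print(ipv4_space)            # 4294967296
print(f"{ipv6_space:.2e}")   # 3.40e+38
```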
39. HTTP VERSION
HTTP/0.9
• Initial version of HTTP — a simple client-server,
request-response, telnet-friendly protocol.
• Methods supported: GET only.
• Response type: hypertext only.
• Connection nature: terminated immediately after the
response.
• No HTTP headers, No status/error codes, No URLs,
No versioning.
41. HTTP METHOD
AKA HTTP VERB
GET: requests a representation of the
specified resource.
HEAD: same as GET, but without the response
body.
POST: submit an entity to the specified
resource, often causing a change in state or
side effects on the server.
PUT: replaces all current representations of
the target resource with the request payload.
DELETE: deletes the specified
resource.
CONNECT: establishes a tunnel to the server
identified by the target resource.
OPTIONS: describes the communication
options for the target resource.
TRACE: performs a message loop-back test
along the path to the target resource.
PATCH: applies partial modifications to a
resource.
Indicates the action that the
HTTP request expects from the
queried server.
46. HTTP STATUS CODE
1xx (Informational): the server has received
the request and is continuing the process.
2xx (Successful): the request was successful,
and the client has received the expected
information.
3xx (Redirection): the request has been
redirected and its completion requires
further actions.
4xx (Client Error): the website or the page
could not be reached, either the page is
unavailable, or the request contains bad
syntax.
5xx (Server Error): while the request
appears to be valid, the server could not
complete the request.
A 3-digit code that indicates the
status of an HTTP request.
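The class of a status code depends only on its first digit, so the mapping can be sketched in a few lines:

```python
def status_class(code: int) -> str:
    """Map a 3-digit HTTP status code to its response class."""
    classes = {
        1: "Informational",
        2: "Successful",
        3: "Redirection",
        4: "Client Error",
        5: "Server Error",
    }
    return classes.get(code // 100, "Unknown")

print(status_class(200))  # Successful
print(status_class(404))  # Client Error
print(status_class(503))  # Server Error
```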
47. HTTP RESPONSE HEADERS
Key-value pairs that communicate core information, such as language
and format of the data being sent in the response body.
50. API ARCHITECTURAL STYLES
The choice of an architectural style
should be one of the first decisions
taken, as this is a decision that is
hard to change later!
51. WHAT IS REST?
Not a standard, not a protocol:
an architectural style, consisting of
architectural constraints and agreements,
which are based on HTTP.
52. REST CONSTRAINTS
Use of HTTP capabilities as far as possible.
Design of resources (nouns), not methods or operations
(verbs).
Use of the uniform interface, defined by HTTP methods, which
have well-specified semantics.
Stateless communication between client and server.
Use of loose coupling and independence of the requests.
Use of HTTP return codes.
Use of media-types.
58. THE MEANING OF LEVELS
Level 1 tackles the question of handling complexity
by using divide and conquer, breaking a large
service endpoint down into multiple resources.
Level 2 introduces a standard set of verbs so that
we handle similar situations in the same way,
removing unnecessary variation.
Level 3 introduces discoverability, providing a way
of making a protocol more self-documenting.
59. IS MY SERVICE RESTFUL?
• If the name of the service is a verb instead of a noun, the service is likely
RPC and not RESTful.
• If the name of the service to be executed is encoded in the request
body, the service is likely RPC and not RESTful.
• If the back-button in the web-application does not work as expected,
the service is not stateless and not RESTful.
• If the service or website does not behave as expected after turning
cookies off, the service is not stateless and not RESTful.
64. AUTHENTICATION
HTTP basic authentication
an HTTP Authorization header containing a base64-encoded
username:password string is passed in the request header.
API keys
a key is passed in every request in the HTTP header or on the
querystring.
OAuth
a token is obtained from an OAuth server before any request
can be made. The OAuth token is then sent with each API
request until it expires.
JSON Web Tokens (JWT)
digitally-signed authentication tokens are securely
transmitted in both the request and response header.
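For illustration, here is how a Basic Authorization header is constructed from a `username:password` pair, alongside the Bearer form typically used for OAuth tokens and JWTs. The credentials are invented:

```python
import base64

def basic_auth_header(username: str, password: str) -> dict:
    """Build an HTTP Basic Authentication header: 'username:password' base64-encoded."""
    token = base64.b64encode(f"{username}:{password}".encode()).decode()
    return {"Authorization": f"Basic {token}"}

def bearer_auth_header(token: str) -> dict:
    """Build a Bearer header, as commonly used with OAuth tokens and JWTs."""
    return {"Authorization": f"Bearer {token}"}

print(basic_auth_header("alice", "secret"))
# {'Authorization': 'Basic YWxpY2U6c2VjcmV0'}
```

Note that Basic auth only encodes, it does not encrypt, which is one reason HTTPS is non-negotiable.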
65. SECURITY
• Use HTTPS.
• Use a robust authentication method.
• Use CORS to limit client-side calls to specific domains.
• Provide minimum functionality.
• Validate all endpoint URLs and body data.
• Avoid exposing API tokens in client-side JavaScript.
• Block access from unknown domains or IP addresses.
• Block unexpectedly large payloads.
• Consider rate limiting.
• Respond with an appropriate HTTP status code and caching header.
• Log requests and investigate failures.
An API (Application Programming Interface) is a software-to-software interface that enables two applications to exchange data with each other. Though this might sound a little boring, APIs are used a lot in the real world to create some amazing applications. One particularly key role that APIs will play in the future is connecting to the Internet of Things.
The Open Systems Interconnection (OSI) model is a conceptual model created by the International Organization for Standardization which enables diverse communication systems to communicate using standard protocols.
The OSI model can be seen as a universal language for computer networking. It’s based on the concept of splitting up a communication system into seven abstract layers, each one stacked upon the last.
Each layer of the OSI model handles a specific job and communicates with the layers above and below itself.
This is the only layer that directly interacts with data from the user. Software applications like web browsers and email clients rely on the application layer to initiate communications. But it should be made clear that client software applications are not part of the application layer; rather the application layer is responsible for the protocols and data manipulation that the software relies on to present meaningful data to the user. Application layer protocols include HTTP as well as SMTP (Simple Mail Transfer Protocol is one of the protocols that enables email communications).
This layer is primarily responsible for preparing data so that it can be used by the application layer; in other words, layer 6 makes the data presentable for applications to consume. The presentation layer is responsible for translation, encryption, and compression of data.
Two communicating devices may be using different encoding methods, so layer 6 is responsible for translating incoming data into a syntax that the application layer of the receiving device can understand.
If the devices are communicating over an encrypted connection, layer 6 is responsible for adding the encryption on the sender’s end as well as decoding the encryption on the receiver's end so that it can present the application layer with unencrypted, readable data.
Finally the presentation layer is also responsible for compressing data it receives from the application layer before delivering it to layer 5. This helps improve the speed and efficiency of communication by minimizing the amount of data that will be transferred.
This is the layer responsible for opening and closing communication between the two devices. The time between when the communication is opened and closed is known as the session. The session layer ensures that the session stays open long enough to transfer all the data being exchanged, and then promptly closes the session in order to avoid wasting resources.
The session layer also synchronizes data transfer with checkpoints. For example, if a 100 megabyte file is being transferred, the session layer could set a checkpoint every 5 megabytes. In the case of a disconnect or a crash after 52 megabytes have been transferred, the session could be resumed from the last checkpoint, meaning only 50 more megabytes of data need to be transferred. Without the checkpoints, the entire transfer would have to begin again from scratch.
Layer 4 is responsible for end-to-end communication between the two devices. This includes taking data from the session layer and breaking it up into chunks called segments before sending it to layer 3. The transport layer on the receiving device is responsible for reassembling the segments into data the session layer can consume.
The transport layer is also responsible for flow control and error control. Flow control determines an optimal speed of transmission to ensure that a sender with a fast connection doesn’t overwhelm a receiver with a slow connection. The transport layer performs error control on the receiving end by ensuring that the data received is complete, and requesting a retransmission if it isn’t.
The network layer is responsible for facilitating data transfer between two different networks. If the two devices communicating are on the same network, then the network layer is unnecessary. The network layer breaks up segments from the transport layer into smaller units, called packets, on the sender’s device, and reassembles these packets on the receiving device. The network layer also finds the best physical path for the data to reach its destination; this is known as routing.
The data link layer is very similar to the network layer, except the data link layer facilitates data transfer between two devices on the SAME network. The data link layer takes packets from the network layer and breaks them into smaller pieces called frames. Like the network layer, the data link layer is also responsible for flow control and error control in intra-network communication (The transport layer only does flow control and error control for inter-network communications).
This layer includes the physical equipment involved in the data transfer, such as the cables and switches. This is also the layer where the data gets converted into a bit stream, which is a string of 1s and 0s. The physical layer of both devices must also agree on a signal convention so that the 1s can be distinguished from the 0s on both devices.
Each protocol creates a protocol data unit (PDU) for transmission that includes headers required by that protocol and data to be transmitted. This data becomes the service data unit (SDU) of the next layer below it. This diagram shows a layer 7 PDU consisting of a layer 7 header (“L7H”) and application data. When this is passed to layer 6, it becomes a layer 6 SDU. The layer 6 protocol prepends to it a layer 6 header (“L6H”) to create a layer 6 PDU, which is passed to layer 5. The encapsulation process continues all the way down to layer 2, which creates a layer 2 PDU—in this case shown with both a header and a footer—that is converted to bits and sent at layer 1.
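The encapsulation described above can be simulated as a toy sketch; the header labels mirror the diagram's "L7H" notation, though real protocols of course use binary headers rather than strings:

```python
def encapsulate(app_data: str) -> str:
    """Toy OSI encapsulation: each layer prepends its header, turning
    the upper layer's PDU into its own SDU."""
    pdu = f"L7H|{app_data}"            # layer 7 PDU: header + application data
    for layer in (6, 5, 4, 3):
        pdu = f"L{layer}H|{pdu}"       # each layer prepends its own header
    pdu = f"L2H|{pdu}|L2F"             # layer 2 adds both a header and a footer
    return pdu                          # converted to bits and sent at layer 1

print(encapsulate("hello"))
# L2H|L3H|L4H|L5H|L6H|L7H|hello|L2F
```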
OSI (Open Systems Interconnection) model was created by the International Organization for Standardization (ISO), an international standard-setting body. It was designed to be a reference model for describing the functions of a communication system. The OSI model provides a framework for creating and implementing networking standards and devices and describes how network applications on different computers can communicate through the network media. Most people learn the mnemonic "Please Do Not Throw Sausage Pizza Away":
Data traversing the Internet is divided into smaller pieces, called packets. IP information is attached to each packet, and this information helps routers to send packets to the right place.
UDP is a communication protocol used across the Internet for especially time-sensitive transmissions such as video playback or DNS lookups. It speeds up communications by not requiring what’s known as a “handshake”, allowing data to be transferred before the receiving party agrees to the communication. This allows the protocol to operate very quickly, and also creates an opening for exploitation.
A TCP connection, which is commonly used for loading web page content, requires a handshake in which the receiver agrees to the communication before the data is sent. UDP will send data without confirmation, even if the request is fraudulent.
UDP doesn’t have the error checking and ordering functionality of TCP and is best utilized when error checking is not needed and speed is important. This built-in lack of reliability is why UDP is sometimes referred to as ‘Unreliable Datagram Protocol’.
Introduced in 1980, UDP is among the oldest network protocols still in use. Applications that utilize UDP must be able to tolerate errors, loss, and duplication. While this sounds less than ideal, there are several applications where a faster and less reliable protocol is the best choice.
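UDP's light weight is visible in its header: the entire thing is 8 bytes, four 16-bit fields. A sketch packing one with Python's `struct` module (the ports and payload here are made up; the checksum is left at 0, which IPv4 permits):

```python
import struct

def udp_header(src_port: int, dst_port: int, payload: bytes) -> bytes:
    """Pack the complete 8-byte UDP header: source port, destination port,
    length (header + payload), and checksum (0 = unused, legal over IPv4)."""
    length = 8 + len(payload)
    return struct.pack("!HHHH", src_port, dst_port, length, 0)

header = udp_header(53000, 53, b"dns-query")
print(len(header))  # 8 -- compare TCP's minimum 20-byte header
```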
For example, when an email is sent over TCP, a connection is established and a 3-way handshake is made. First, the source sends a SYN “initial request” packet to the target server in order to start the dialogue. The target server then sends a SYN-ACK packet to agree to the process. Lastly, the source sends an ACK packet to the target to confirm the process, after which the message contents can be sent. The email message is ultimately broken down into packets before each packet is sent out into the Internet, where it traverses a series of gateways before arriving at the target device where the group of packets are reassembled by TCP into the original contents of the email.
Voice and video traffic are sent using this protocol because they are both time-sensitive and designed to handle some level of loss. For example, VoIP (Voice over IP), which is used by many internet-based telephone services, operates over UDP. This is because a staticky phone conversation is preferable to one that is crystal clear but heavily delayed. This also makes UDP the ideal protocol for online gaming. Similarly, because DNS and NTP servers both need to be fast and efficient, they operate over UDP.
What’s in an HTTP request?
An HTTP request is the way internet communications platforms such as web browsers ask for the information they need to load a website.
Each HTTP request made across the Internet carries with it a series of encoded data that carries different types of information. A typical HTTP request contains:
HTTP version type
a URL
an HTTP method
HTTP request headers
Optional HTTP body.
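Those parts can be assembled into the raw text a client actually sends over the wire; a minimal sketch (the host, path, and headers are example values):

```python
def build_request(method, path, host, headers=None, body=""):
    """Assemble a raw HTTP/1.1 request: request line, headers,
    a blank line, then the optional body."""
    lines = [f"{method} {path} HTTP/1.1", f"Host: {host}"]
    for name, value in (headers or {}).items():
        lines.append(f"{name}: {value}")
    return "\r\n".join(lines) + "\r\n\r\n" + body

print(build_request("GET", "/index.html", "example.com",
                    {"Accept": "text/html"}))
```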
Let’s explore in greater depth how these requests work, and how the contents of a request can be used to share information.
Invented by Tim Berners-Lee at CERN in the years 1989–1991, HTTP (Hypertext Transfer Protocol) is the underlying communication protocol of the World Wide Web. HTTP functions as a request–response protocol in the client–server computing model. HTTP standards are developed by the Internet Engineering Task Force (IETF) and the World Wide Web Consortium (W3C), culminating in the publication of a series of Requests for Comments (RFCs). HTTP has four versions — HTTP/0.9, HTTP/1.0, HTTP/1.1, and HTTP/2.0. Today the version in common use is HTTP/1.1, and the future will be HTTP/2.0.
Establishing a new connection for each request — a major problem in both HTTP/0.9 and HTTP/1.0
Both HTTP/0.9 and HTTP/1.0 required opening a new connection for each request (and closing it immediately after the response was sent). Each time a new connection is established, a TCP three-way handshake must also occur. For better performance, it was crucial to reduce these round-trips between client and server. HTTP/1.1 solved this with persistent connections.
Although HTTPS is secure by design, the SSL/TLS handshake process consumes significant time before an HTTPS connection is established. It normally costs 1–2 seconds and drastically slows down the startup performance of a website.
What is a URI?
URI stands for Uniform Resource Identifier. A URI is text used to identify any resource or name on the Internet. URI has two specializations in the form of URL (Uniform Resource Locator) and URN (Uniform Resource Name) to identify a resource and a name. We mostly see examples of URLs and URNs in the real world.
What is a URL?
URL stands for Uniform Resource Locator, and it is a subset of URI, or Uniform Resource Identifier. A URL includes the location as well as the protocol to retrieve the resource, e.g. in http://java67.blogspot.sg/2012/09/what-is-new-in-java-7-top-5-jdk-7.html, HTTP is the protocol used to retrieve the resource what-is-new-in-java-7-top-5-jdk-7.html available at the location http://java67.blogspot.com. A URL need not always use HTTP as its protocol; it can use any protocol, e.g. ftp://, https://, or ldap://.
What is a URN?
URN stands for Uniform Resource Name. A URN is also a subset of URI. One of the best examples of a URN is an ISBN, which is used to uniquely identify a book. A URN is completely different from a URL as it doesn't include any protocol.
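Python's `urllib.parse` makes the distinction concrete; the ISBN URN below is a made-up example value:

```python
from urllib.parse import urlparse

# A URL is a URI that includes a retrieval scheme and a location.
url = urlparse("http://java67.blogspot.sg/2012/09/what-is-new-in-java-7-top-5-jdk-7.html")
print(url.scheme)   # http
print(url.netloc)   # java67.blogspot.sg

# A URN names a resource without saying how to fetch it, e.g. an ISBN:
urn = urlparse("urn:isbn:0451450523")
print(urn.scheme)   # urn
print(urn.path)     # isbn:0451450523
```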
HTTP has four standard-usage verbs: POST, GET, PUT, and DELETE. They do not correspond to CRUD (Create, Read, Update, Delete). Forget about the distinction between create and update; it won't help you here. Both POST and PUT can be used for create and update operations in different situations. So what exactly is the difference between PUT and POST?
In a nutshell: use PUT if and only if you know both the URL where the resource will live, and the entirety of the contents of the resource. Otherwise, use POST. POST is an incredibly general verb. Because it promises neither safety nor idempotence, and it has a relatively loosely-worded description in the RFC, you can use it for pretty much anything. In fact, you could make all of your requests POST requests because POST makes very few promises; it can behave like a GET, a PUT, or a DELETE if it wants to. It also can do some things that no other verb can do - it can create a new resource at a URL different from the URL in the HTTP request; and it can modify part of a resource without changing the whole thing (although the proposed but not widely-accepted PATCH method can do something similar).
PUT is a much more restrictive verb. It takes a complete resource and stores it at the given URL. If there was a resource there previously, it is replaced; if not, a new one is created. These properties support idempotence, which a naive create or update operation might not. I suspect this may be why PUT is defined the way it is; it's an idempotent operation which allows the client to send information to the server.
Very often, POST is used for creation because the server is responsible for assigning URLs to resources. As an example, a forum post is likely to be POSTed because the server must assign it a unique URL. If PUT were used, it would force clients to choose URLs for forum posts, and there would be no arbiter to prevent collisions when two clients chose the same URL.
Very often, PUT is used for update because the resource already has a URL which the client knows about. The client just has to supply a modified version of the resource.
Sometimes, PUT is used for creation. Generally this will be in a level 2 Richardson Maturity Model situation, where the client knows about the structure of URLs and how to create them. For example, if I know a server has a URL scheme where users live at http://example.com/users/username, I could create myself a user account by doing a PUT to http://example.com/users/rhebus, because I already know what my desired username is, and therefore, which URL my user account will live at. [In theory, PUT could also be used for creation at level 3 Richardson, where the server tells the client about URLs where resources may be created. If anyone has experienced this situation, I would love to hear about it.]
Sometimes, POST is used for update because only part of the resource is being updated. PUT requires a complete resource; but the client may not know the full contents of the resource or the client may not wish to send the full contents of the resource down the wire. (This is the use-case that PATCH would cover.) For example, a client may wish to append to a log file on the server, without caring about the existing contents of the file.
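The idempotence difference can be sketched with a toy in-memory store (the URLs and payloads are invented):

```python
import itertools

class ResourceStore:
    """Toy store illustrating why PUT is idempotent and POST is not."""
    def __init__(self):
        self.resources = {}
        self._ids = itertools.count(1)

    def put(self, url, representation):
        # PUT stores a complete representation at a client-chosen URL;
        # repeating the same PUT leaves the store in the same state.
        self.resources[url] = representation

    def post(self, collection_url, representation):
        # POST lets the server assign the URL; repeating it creates
        # a brand-new resource each time.
        url = f"{collection_url}/{next(self._ids)}"
        self.resources[url] = representation
        return url

store = ResourceStore()
store.put("/users/rhebus", {"name": "rhebus"})
store.put("/users/rhebus", {"name": "rhebus"})   # idempotent: still one resource
store.post("/posts", {"text": "hi"})
store.post("/posts", {"text": "hi"})             # not idempotent: two resources
print(len(store.resources))  # 3
```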
[Source: http://www.philandstuff.com/2011/04/14/put-vs-post.html]
The “xx” refers to different numbers between 00 and 99.
Status codes starting with the number ‘2’ indicate a success. For example, after a client requests a web page, the most commonly seen responses have a status code of ‘200 OK’, indicating that the request was properly completed.
If the response starts with a ‘4’ or a ‘5’ that means there was an error and the webpage will not be displayed. A status code that begins with a ‘4’ indicates a client-side error (It’s very common to encounter a ‘404 NOT FOUND’ status code when making a typo in a URL). A status code beginning in ‘5’ means something went wrong on the server side. Status codes can also begin with a ‘1’ or a ‘3’, which indicate an informational response and a redirect, respectively.
In general, an architectural style is a large-scale, predefined solution structure. There are architectural styles for pretty much anything, for example for building houses and for building APIs. Using an architectural style helps us to design the solution quicker than designing everything from scratch. Architectural styles are similar to patterns, but provide a solution for a larger challenge. The choice of an architectural style should be one of the first decisions taken, as this is a decision that is hard to change later.
The answer to which style to implement is the usual “it depends.” A distributed large scale system may benefit from REST while a smaller monolithic one does not. MVC systems with a basic CRUD can benefit from RPC as long as there is little need to scale.
When choosing either approach or style it is key to know the differences. There is no right or wrong here. What is more important, is to know which approach solves for the job at hand.
REST defines a number of constraints for API design. Many of the REST constraints are actually HTTP constraints, and REST leverages these HTTP constraints for APIs. The REST style ensures that APIs use HTTP correctly. These constraints limit the freedom of design, so not every design is allowed anymore. Succumbing to constraints sounds like a loss, but actually, it ensures that we get many desirable properties for free that we do not have to think about anymore. REST imposes the following constraints:
The REST constraints tell us to design APIs according to HATEOAS (Hypertext as the Engine of Application State). The Richardson Maturity Index rates APIs according to the fulfillment of these constraints and assigns the highest rating (level 3) to the proper implementation of the HATEOAS ideas.
In short, HATEOAS APIs should return not only data to the caller, but also metadata about how to interact with the data on a semantic level. The semantic level is important: just giving information about CRUD operations is in general not sufficient (those are already defined by HTTP) – it should be interactions on a “business level”.
HATEOAS is all about constructing the response of the API: which information to put in and which information to link. There’s no absolute standard as to how to represent hypermedia controls. Spotify has chosen one that works for them. Others may use ATOM links, or the Hypertext Application Language (HAL).
Let's assume I want to book an appointment with my doctor. My appointment software first needs to know what open slots my doctor has on a given date, so it makes a request of the hospital appointment system to obtain that information. In a level 0 scenario, the hospital will expose a service endpoint at some URI. I then post to that endpoint a document containing the details of my request.
So far this is a straightforward RPC style system. It's simple as it's just slinging plain old XML (POX) back and forth. If you use SOAP or XML-RPC it's basically the same mechanism, the only difference is that you wrap the XML messages in some kind of envelope.
The first step towards the Glory of REST in the RMM is to introduce resources. So now, rather than making all our requests to a singular service endpoint, we start talking to individual resources.
“The key abstraction of information in REST is a resource. Any information that can be named can be a resource: a document or image, a temporal service (e.g. "today's weather in Los Angeles"), a collection of other resources, a non-virtual object (e.g. a person), and so on. In other words, any concept that might be the target of an author's hypertext reference must fit within the definition of a resource. A resource is a conceptual mapping to a set of entities, not the entity that corresponds to the mapping at any particular point in time.” - Roy Fielding’s dissertation.
The difference now is that if anyone needs to do anything about the appointment, like book some tests, they first get hold of the appointment resource, which might have a URI like http://royalhope.nhs.uk/slots/1234/appointment, and post to that resource.
To an object guy like me this is like the notion of object identity. Rather than calling some function in the ether and passing arguments, we call a method on one particular object providing arguments for the other information.
I've used HTTP POST verbs for all my interactions here in level 0 and 1, but some people use GETs instead or in addition. At these levels it doesn't make much difference, they are both being used as tunneling mechanisms allowing you to tunnel your interactions through HTTP. Level 2 moves away from this, using the HTTP verbs as closely as possible to how they are used in HTTP itself. At Level 2, the use of GET for a request like this is crucial. HTTP defines GET as a safe operation, that is it doesn't make any significant changes to the state of anything. This allows us to invoke GETs safely any number of times in any order and get the same results each time. An important consequence of this is that it allows any participant in the routing of requests to use caching, which is a key element in making the web perform as well as it does. HTTP includes various measures to support caching, which can be used by all participants in the communication. By following the rules of HTTP we're able to take advantage of that capability.
Even if I use the same post as level 1, there's another significant difference in how the remote service responds. If all goes well, the service replies with a response code of 201 to indicate that there's a new resource in the world.
To book an appointment we need an HTTP verb that does change state, a POST or a PUT. I'll use the same POST that I did earlier.
The 201 response includes a location attribute with a URI that the client can use to GET the current state of that resource in the future. The response here also includes a representation of that resource to save the client an extra call right now.
The important part of this response is the use of an HTTP response code to indicate something has gone wrong. In this case a 409 seems a good choice to indicate that someone else has already updated the resource in an incompatible way. Rather than using a return code of 200 but including an error response, at level 2 we explicitly use some kind of error response like this. It's up to the protocol designer to decide what codes to use, but there should be a non-2xx response if an error crops up. Level 2 introduces using HTTP verbs and HTTP response codes.
There is an inconsistency creeping in here. REST advocates talk about using all the HTTP verbs. They also justify their approach by saying that REST is attempting to learn from the practical success of the web. But the world-wide web doesn't use PUT or DELETE much in practice. There are sensible reasons for using PUT and DELETE more, but the existence proof of the web isn't one of them.
The key elements that are supported by the existence of the web are the strong separation between safe (eg GET) and non-safe operations, together with using status codes to help communicate the kinds of errors you run into.
The final level introduces something that you often hear referred to under the ugly acronym of HATEOAS (Hypertext As The Engine Of Application State). It addresses the question of how to get from a list of open slots to knowing what to do to book an appointment.
We begin with the same initial GET that we sent in level 2. But the response has a new element. Each slot now has a link element which contains a URI to tell us how to book an appointment.
The point of hypermedia controls is that they tell us what we can do next, and the URI of the resource we need to manipulate to do it. Rather than us having to know where to post our appointment request, the hypermedia controls in the response tell us how to do it.
The POST would again copy that of level 2. And the reply contains a number of hypermedia controls for different things to do next. One obvious benefit of hypermedia controls is that it allows the server to change its URI scheme without breaking clients. As long as clients look up the "addTest" link URI then the server team can juggle all URIs other than the initial entry points.
A further benefit is that it helps client developers explore the protocol. The links give client developers a hint as to what may be possible next. It doesn't give all the information: both the "self" and "cancel" controls point to the same URI - they need to figure out that one is a GET and the other a DELETE. But at least it gives them a starting point as to what to think about for more information and to look for a similar URI in the protocol documentation.
Similarly it allows the server team to advertise new capabilities by putting new links in the responses. If the client developers are keeping an eye out for unknown links these links can be a trigger for further exploration.
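A sketch of what such a response might look like; there is no single standard representation, and the "links" shape and URIs below are invented for illustration:

```python
import json

def slot_response(slot_id: str) -> str:
    """Toy level 3 response: the data plus hypermedia controls telling
    the client what it can do next."""
    body = {
        "slot": {"id": slot_id, "doctor": "mjones", "start": "14:00"},
        "links": [
            {"rel": "self", "uri": f"/slots/{slot_id}"},
            {"rel": "book", "uri": f"/slots/{slot_id}/appointment"},
        ],
    }
    return json.dumps(body)

response = json.loads(slot_response("1234"))
rels = {link["rel"] for link in response["links"]}
print(sorted(rels))  # ['book', 'self']
```

Because the client follows the "book" link rather than hard-coding a URL, the server can reshape its URI scheme without breaking anyone.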
There's no absolute standard as to how to represent hypermedia controls.
I should stress that the RMM, while a good way to think about the elements of REST, is not a definition of levels of REST itself. Roy Fielding has made it clear that level 3 RMM is a precondition of REST. Like many terms in software, REST gets lots of definitions, but since Roy Fielding coined the term, his definition should carry more weight than most.
What I find useful about the RMM is that it provides a good step-by-step way to understand the basic ideas behind restful thinking. As such, I see it as a tool to help us learn about the concepts, not something that should be used in some kind of assessment mechanism. I don't think we have enough examples yet to be really sure that the restful approach is the right way to integrate systems, but I do think it's a very attractive approach and the one I would recommend in most situations.
The result is a model that helps us think about the kind of HTTP service we want to provide and frame the expectations of people looking to interact with it.
All are valid options to fetch data for user 123. The number of combinations increases further when you have more complex operations. For example, return ten users whose surnames start with ‘A’ and work for companyX, starting at record 51 when ordered by date of birth in reverse chronological order.
Ultimately, it doesn’t matter how you format URLs, but consistency across your API is important. That can be difficult to achieve on large codebases with many developers.
Software evolves, APIs must be versioned
URLs suck because they should represent the entity: I actually kind of agree with this insofar as the entity I’m retrieving is an account, not a version of the account. Semantically, it’s not really correct, but damn it’s easy to use!
Custom request headers suck because they’re not really a semantic way of describing the resource: the HTTP spec gives us a means of requesting the format we’d like the resource represented in by way of the Accept header, so why reproduce this?
Accept headers suck because they’re harder to test: I can no longer just give someone a URL and say “Here, click this”; rather, they have to carefully construct the request and configure the Accept header appropriately.
More info at -> https://www.troyhunt.com/your-api-versioning-is-wrong-which-is/
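The three styles debated above can be sketched as request shapes; all hostnames, header names, and media types here are illustrative only:

```python
# Three common ways to request version 2 of a hypothetical accounts API.

# 1. Version in the URL path -- easy to share and test, semantically impure.
url_versioned = "https://api.example.com/v2/accounts/123"

# 2. Version in a custom request header -- keeps the URL stable.
header_versioned = {
    "url": "https://api.example.com/accounts/123",
    "headers": {"api-version": "2"},
}

# 3. Version via content negotiation in the Accept header.
accept_versioned = {
    "url": "https://api.example.com/accounts/123",
    "headers": {"Accept": "application/vnd.example.v2+json"},
}

print(url_versioned)
```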
API authentication will vary depending on the use context. In some cases, the third-party application is considered to be another logged-in user with specific rights and permissions — for example, when generating directions from a map API. In other cases, the third-party application is being used by a registered user and can only access their data — for example, when fetching email content or documents.
N API requests must be made for each result in the parent request.
If this is a common use case, the RESTful API could be changed so that every returned book contained the full author details such as their name, age, country, biography, and so on. It could also provide full details of their other books — although this would considerably increase the response payload!
To avoid massive responses, the API could be adjusted so author details can be controlled — for example, ?author_details=basic — but the number of options can quickly become bewildering.
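One way to express such options is as ordinary query parameters; a sketch using `urllib.parse` (the parameter names and host are hypothetical):

```python
from urllib.parse import urlencode

# Hypothetical query options controlling how much author detail is embedded
# in each returned book, plus simple pagination.
params = {"author_details": "basic", "page": 2, "per_page": 10}
query = urlencode(params)
url = f"https://api.example.com/books?{query}"
print(url)
# https://api.example.com/books?author_details=basic&page=2&per_page=10
```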