RS-485, also known as TIA-485(-A) or EIA-485, is a standard defining the electrical characteristics of drivers and receivers for use in serial communications systems. Electrical signaling is balanced, and multipoint systems are supported. The standard is jointly published by the Telecommunications Industry Association and Electronic Industries Alliance (TIA/EIA). Digital communications networks implementing the standard can be used effectively over long distances and in electrically noisy environments. Multiple receivers may be connected to such a network in a linear, multidrop bus. These characteristics make RS-485 useful in industrial control systems and similar applications.
RS-485 supports inexpensive local networks and multidrop communications links, using the same differential signaling over twisted pair as RS-422. It is generally accepted that RS-485 can be used with data rates up to 10 Mbit/s or, at lower speeds, distances up to 1,200 m (4,000 ft). As a rule of thumb, the speed in bit/s multiplied by the length in metres should not exceed 10⁸. Thus a 50-meter cable should not signal faster than 2 Mbit/s.
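This rule of thumb is an informal guideline rather than part of the standard, but it is easy to apply; the short Python sketch below simply restates the arithmetic for a given cable length or data rate.

```python
# Rule-of-thumb check for RS-485: data rate (bit/s) x cable length (m) <= 1e8.
# This is an informal guideline, not a requirement of the TIA-485 standard.

RULE_OF_THUMB = 1e8  # bit/s * m

def max_bit_rate(cable_length_m: float) -> float:
    """Highest data rate (bit/s) suggested by the rule of thumb."""
    return RULE_OF_THUMB / cable_length_m

def max_cable_length(bit_rate_bps: float) -> float:
    """Longest cable (m) suggested by the rule of thumb."""
    return RULE_OF_THUMB / bit_rate_bps

if __name__ == "__main__":
    print(max_bit_rate(50))       # 2_000_000.0 -> 2 Mbit/s for a 50 m cable
    print(max_cable_length(1e6))  # 100.0 m at 1 Mbit/s
```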
In contrast to RS-422, which has a driver circuit which cannot be switched off, RS-485 drivers use three-state logic allowing individual transmitters to be deactivated. This allows RS-485 to implement linear bus topologies using only two wires. The equipment located along a set of RS-485 wires are interchangeably called nodes, stations or devices. The recommended arrangement of the wires is as a connected series of point-to-point (multidropped) nodes, i.e. a line or bus, not a star, ring, or multiply connected network. Star and ring topologies are not recommended because of signal reflections or excessively low or high termination impedance. If a star configuration is unavoidable, special RS-485 repeaters are available which bidirectionally listen for data on each span and then retransmit the data onto all other spans.
Typical bias network together with termination. Biasing and termination values are not specified in the RS-485 standard.

Ideally, the two ends of the cable will have a termination resistor connected across the two wires. Without termination resistors, signal reflections off the unterminated end of the cable can cause data corruption. Termination resistors also reduce electrical noise sensitivity due to the lower impedance. The value of each termination resistor should be equal to the cable characteristic impedance (typically, 120 ohms for twisted pairs). The termination also includes pull up and pull down resistors to establish fail-safe bias for each data wire for the case when the lines are not being driven by any device. This way, the lines will be biased to known voltages and nodes will not interpret the noise from undriven lines as actual data; without biasing resistors, the data lines float in such a way that electrical noise sensitivity is greatest when all device stations are silent or unpowered.
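Because the standard does not specify biasing values, designers typically size the pull-up and pull-down resistors so that the idle differential voltage stays above the 200 mV receiver threshold. The sketch below illustrates one common way to do that arithmetic; the 5 V supply, the two 120 Ω terminations, and the 200 mV target are assumptions for a typical installation, not values taken from the standard.

```python
# Fail-safe bias sizing sketch for an RS-485 bus (illustrative assumptions only):
# a supply Vcc feeds a pull-up resistor R to line B, a pull-down resistor R from
# line A to ground, and the two 120-ohm termination resistors appear in parallel
# (60 ohms) across the pair when no driver is active.

def idle_differential_v(vcc: float, r_bias: float, r_term_parallel: float = 60.0) -> float:
    """Idle (undriven) differential voltage across the terminated pair."""
    return vcc * r_term_parallel / (2 * r_bias + r_term_parallel)

def max_bias_resistor(vcc: float, v_min: float = 0.2, r_term_parallel: float = 60.0) -> float:
    """Largest bias resistor that still guarantees at least v_min across the pair."""
    return (vcc * r_term_parallel / v_min - r_term_parallel) / 2

if __name__ == "__main__":
    print(max_bias_resistor(5.0))           # 720.0 ohms; 680-ohm resistors are a common choice
    print(idle_differential_v(5.0, 680.0))  # ~0.21 V, just above the 200 mV threshold
```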
The EIA once labeled all its standards with the prefix "RS" (Recommended Standard), but the EIA-TIA officially replaced "RS" with "EIA/TIA" to help identify the origin of its standards. The EIA has officially disbanded and the standard is now maintained by the TIA as TIA-485, but engineers and applications guides continue to use the RS-485 designation. The initial edition of EIA RS-485 was dated April 1983.
RS-485 only specifies the electrical characteristics of the generator and the receiver: the physical layer. It does not specify or recommend any communications protocol; other standards define the protocols for communication over an RS-485 link. The foreword to the standard references Telecommunications Systems Bulletin TSB-89, which contains application guidelines, including data signaling rate vs. cable length, stub length, and configurations.
Section 4 defines the electrical characteristics of the generator (transmitter or driver), receiver, transceiver, and system. These characteristics include: definition of a unit load, voltage ranges, open-circuit voltages, thresholds, and transient tolerance. It also defines three generator interface points (signal lines): A, B and C. The data is transmitted on A and B; C is a ground reference. This section also defines the logic states 1 (off) and 0 (on) by the polarity between the A and B terminals. If A is negative with respect to B, the state is binary 1. The reversed polarity (A +, B −) is binary 0. The standard does not assign any logic function to the two states.
RS-485, like RS-422, can be made full-duplex by using four wires. Since RS-485 is a multi-point specification, however, this is not necessary or desirable in many cases. RS-485 and RS-422 can interoperate with certain restrictions.
Converters between RS-485 and RS-232 are available to allow a personal computer to communicate with remote devices. By using repeaters very large RS-485 networks can be formed. TSB-89A, Application Guidelines for TIA/EIA-485-A does not recommend using star topology.
RS-485 signals are used in a wide range of computer and automation systems. In a computer system, SCSI-2 and SCSI-3 may use this specification to implement the physical layer for data transmission between a controller and a disk drive. RS-485 is used as a low-speed vehicle bus for data communications in commercial aircraft cabins. It requires minimal wiring and can share the wiring among several seats, reducing weight.
RS-485 interfaces are used in programmable logic controllers and on factory floors. RS-485 is used as the physical layer underlying many standard and proprietary automation protocols used to implement industrial control systems, including the most common versions of Modbus and Profibus. DH-485 is a proprietary communications protocol used by Allen-Bradley in its line of industrial control units. Using a series of dedicated interface devices, it allows PCs and industrial controllers to communicate. Because RS-485 signaling is differential, it resists electromagnetic interference from motors and welding equipment.
In theatre and performance venues, RS-485 networks are used to control lighting and other systems using the DMX512 protocol. RS-485 serves as a physical layer for the AES3 digital audio interconnect.
RS-485 is also used in building automation as the simple bus wiring and long cable length is ideal for joining remote devices. It may be used to control video surveillance systems or to interconnect security control panels and devices such as access control card readers.
It is also used in Digital Command Control (DCC) for model railways. The external interface to the DCC command station is often RS-485 used by hand-held controllers or for controlling the layout in a networked PC environment. 8P8C modular connectors are used in this case.
RS-485 does not define a communication protocol; merely an electrical interface. Although many applications use RS-485 signal levels, the speed, format, and protocol of the data transmission are not specified by RS-485. Interoperability of even similar devices from different manufacturers is not assured by compliance with the signal levels alone.

The RS-485 differential line consists of two signals:
A, which is low for logic 1 and high for logic 0 and,
B, which is high for logic 1 and low for logic 0.
Because a mark (logic 1) condition is traditionally represented (e.g. in RS-232) with a negative voltage and space (logic 0) represented with a positive one, A may be considered the non-inverting signal and B as inverting. The RS-485 standard states (paraphrased):
- For an off, mark or logic 1 state, the driver's A terminal is negative relative to the B terminal.
- For an on, space or logic 0 state, the driver's A terminal is positive relative to the B terminal.
The truth tables of most popular devices, starting with the SN75176, show the output signals inverted. This is in accordance with the A/B naming used, incorrectly, by most differential transceiver manufacturers, including:
- Intersil, as seen in their data sheet for the ISL4489 transceiver
- Maxim, as seen in their data sheet for the MAX483 transceiver and for the newer-generation 3.3 V MAX3485 transceiver
- Linear Technology, as seen in their datasheet for the LTC2850, LTC2851, LTC2852
- Analog Devices, as seen in their datasheet for the ADM3483, ADM3485, ADM3488, ADM3490, ADM3491
- FTDI, as seen in their datasheet for the USB-RS485-WE-1800-BT
These manufacturers are all incorrect (but consistent), and their practice is in widespread use. The issue also exists in programmable logic controller applications. Care must be taken when using A/B naming. Alternate nomenclature is often used to avoid confusion surrounding the A/B naming:
- TX+/RX+ or D+ as alternative for B (high for mark i.e. idle)
- TX−/RX− or D− as alternative for A (low for mark i.e. idle)
RS-485 standard-conformant drivers provide a differential output of a minimum of 1.5 V across a 54 Ω load, whereas standard-conformant receivers detect a differential input down to 200 mV. The two values provide a sufficient margin for reliable data transmission even under severe signal degradation across the cable and connectors. This robustness is the main reason why RS-485 is well suited for long-distance networking in noisy environments.
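To make that margin concrete, the following few lines of Python compute how much attenuation a minimum-compliant signal can tolerate before falling below the receiver threshold; the decibel figure is simply 20·log10 of the voltage ratio.

```python
import math

DRIVER_MIN_DIFF_V = 1.5     # minimum driver differential output into 54 ohms
RECEIVER_THRESHOLD_V = 0.2  # smallest differential input a receiver must detect

ratio = DRIVER_MIN_DIFF_V / RECEIVER_THRESHOLD_V  # 7.5x allowed attenuation
margin_db = 20 * math.log10(ratio)                # about 17.5 dB

print(f"Allowed attenuation: {ratio:.1f}x ({margin_db:.1f} dB)")
```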
In addition to the A and B connections, an optional, third connection may be present (the TIA standard requires the presence of a common return path between all circuit grounds along the balanced line for proper operation) called SC, G or reference, the common signal reference ground used by the receiver to measure the A and B voltages. This connection may be used to limit the common-mode signal that can be impressed on the receiver inputs. The allowable common-mode voltage is in the range −7 V to +12 V, i.e. ±7 V on top of the 0–5 V signal range. Failure to stay within this range will result in, at best, signal corruption, and, at worst, damage to connected devices.
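A quick way to see where the −7 V/+12 V limits come from is to add a ground-potential offset to the signal swing and check the result, as in this sketch; the 0–5 V swing is an assumption for a typical 5 V transceiver, not a figure from the standard.

```python
# Common-mode range check: receiver input = local signal (0..5 V assumed here)
# plus the ground potential difference between the two nodes.

CM_MIN, CM_MAX = -7.0, 12.0  # allowed common-mode range per the standard

def within_common_mode(ground_offset_v: float,
                       signal_low: float = 0.0,
                       signal_high: float = 5.0) -> bool:
    lowest = signal_low + ground_offset_v
    highest = signal_high + ground_offset_v
    return CM_MIN <= lowest and highest <= CM_MAX

print(within_common_mode(0.0))   # True: no ground shift
print(within_common_mode(7.0))   # True: 0..5 V shifted to 7..12 V, right at the limit
print(within_common_mode(-7.5))  # False: lowest input would be -7.5 V, below -7 V
```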
Care must be taken that an SC connection, especially over long cable runs, does not result in an attempt to connect disparate grounds together – it is wise to add some current limiting to the SC connection. Grounds between buildings may vary by a small voltage, but with very low impedance and hence the possibility of catastrophic currents – enough to melt signal cables, PCB traces, and transceiver devices.
RS-485 does not specify any connector or pinout. Circuits may be terminated on screw terminals, D-subminiature connectors, or other types of connectors.
The standard does not discuss cable shielding but makes some recommendations on preferred methods of interconnecting the signal reference common and equipment case grounds.
The diagram below shows potentials of the A (blue) and B (red) pins of an RS-485 line during transmission of one byte (0xD3, least significant bit first) of data using an asynchronous start-stop method.

A signal shown in blue, B in red
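To make the waveform description concrete, here is a small Python sketch (not part of any standard) that lists the start/data/stop bit sequence for 0xD3 sent least significant bit first, together with the corresponding A/B polarity under the convention paraphrased above; 8 data bits, no parity and one stop bit are assumed.

```python
# Sketch of asynchronous start-stop framing of one byte on an RS-485 pair.
# Conventions assumed: logic 1 (mark/idle) -> A negative relative to B,
# logic 0 (space) -> A positive relative to B; 8N1 framing.

def frame_byte(byte: int):
    """Bit sequence for one byte: start bit, 8 data bits LSB-first, stop bit."""
    data_bits = [(byte >> i) & 1 for i in range(8)]  # LSB first
    return [0] + data_bits + [1]                     # start = 0 (space), stop = 1 (mark)

def polarity(bit: int) -> str:
    return "A<B (mark)" if bit else "A>B (space)"

if __name__ == "__main__":
    for i, bit in enumerate(frame_byte(0xD3)):
        print(f"bit slot {i:2d}: logic {bit} -> {polarity(bit)}")
```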

The United States Patent and Trademark Office (USPTO) is an agency in the U.S. Department of Commerce that issues patents to inventors and businesses for their inventions, and trademark registration for product and intellectual property identification.
The USPTO is "unique among federal agencies because it operates solely on fees collected by its users, and not on taxpayer dollars". Its "operating structure is like a business in that it receives requests for services—applications for patents and trademark registrations—and charges fees projected to cover the cost of performing the services [it] provide[s]".
The USPTO is based in Alexandria, Virginia, after a 2005 move from the Crystal City area of neighboring Arlington, Virginia. The offices under Patents and the Chief Information Officer that remained just outside the southern end of Crystal City completed moving to Randolph Square, a brand-new building in Shirlington Village, on April 27, 2009.
The Office is headed by the Under Secretary of Commerce for Intellectual Property and Director of the United States Patent and Trademark Office, a position last held by Andrei Iancu until he left office on January 20, 2021. As of March 2021, Commissioner of Patents Drew Hirshfeld is performing the functions of the Under Secretary and Director in the absence of an appointment or nomination to the positions.
The USPTO cooperates with the European Patent Office (EPO) and the Japan Patent Office (JPO) as one of the Trilateral Patent Offices. The USPTO is also a Receiving Office, an International Searching Authority and an International Preliminary Examination Authority for international patent applications filed in accordance with the Patent Cooperation Treaty.
The USPTO maintains a permanent, interdisciplinary historical record of all U.S. patent applications in order to fulfill objectives outlined in the United States Constitution. The legal basis for the United States patent system is Article 1, Section 8, wherein the powers of Congress are defined.
Signboard of the US Patent and Trademark Office in Alexandria
It states, in part:
The Congress shall have Power ... To promote the Progress of Science and useful Arts, by securing for limited Times to Authors and Inventors the exclusive Right to their respective Writings and Discoveries.
The PTO's mission is to promote "industrial and technological progress in the United States and strengthen the national economy" by:
- Administering the laws relating to patents and trademarks;
- Advising the Secretary of Commerce, the President of the United States, and the administration on patent, trademark, and copyright protection; and
- Providing advice on the trade-related aspects of intellectual property.
USPTO Madison Building Exterior
Interior atrium of the USPTO Madison Building
The USPTO is headquartered at its Alexandria Campus, consisting of 11 buildings in a city-like development surrounded by ground-floor retail and high-rise residential buildings, between the King Street and Eisenhower Avenue Metro stations (the main search building is two blocks due south of the King Street station). The campus is bounded by Duke Street on the north, Eisenhower Avenue on the south, John Carlyle Street on the east, and Elizabeth Lane on the west, in Alexandria, Virginia. An additional building in Arlington, Virginia, was opened in 2009.
USPTO satellite office in San Jose, California
The USPTO was expected by 2014 to open its first ever satellite offices in Detroit, Dallas, Denver, and Silicon Valley to reduce backlog and reflect regional industrial strengths. The first satellite office opened in Detroit on July 13, 2012. In 2013, due to the budget sequestration, the satellite office for Silicon Valley, which is home to one of the nation's top patent-producing cities, was put on hold. However, renovation and infrastructure updates continued after the sequestration, and the Silicon Valley location opened in the San Jose City Hall in 2015.
As of September 30, 2009, the end of the U.S. government's fiscal year, the PTO had 9,716 employees, nearly all of whom were based at its five-building headquarters complex in Alexandria. Of those, 6,242 were patent examiners (almost all of whom were assigned to examine utility patents; only 99 were assigned to examine design patents) and 388 were trademark examining attorneys; the rest were support staff. While the agency had grown noticeably in the preceding years, the rate of growth was far slower in fiscal 2009 than in the recent past. As of the end of FY 2018, the USPTO was composed of 12,579 federal employees, including 8,185 patent examiners, 579 trademark examiners, and 3,815 other staff.


Patent examiners make up the bulk of the employees at USPTO. They hold degrees in various scientific disciplines, but do not necessarily hold law degrees. Unlike patent examiners, trademark examiners must be licensed attorneys.
All examiners work under a strict, "count"-based production system. For every application, "counts" are earned by composing, filing, and mailing a first office action on the merits, and upon disposal of an application.
The Commissioner for Patents oversees three main bodies, headed respectively by the Deputy Commissioner for Patent Operations, currently Peggy Focarino; the Deputy Commissioner for Patent Examination Policy, currently Andrew Hirshfeld as Acting Deputy; and the Commissioner for Patent Resources and Planning, a position that is currently vacant. The Patent Operations of the office is divided into nine different technology centers that deal with various arts.
Prior to 2012, decisions of patent examiners could be appealed to the Board of Patent Appeals and Interferences, an administrative law body of the USPTO. Decisions of the BPAI could further be appealed to the United States Court of Appeals for the Federal Circuit, or a civil suit could be brought against the Commissioner of Patents in the United States District Court for the Eastern District of Virginia. The United States Supreme Court may ultimately decide on a patent case. Under the America Invents Act, the BPAI was converted to the Patent Trial and Appeal Board or "PTAB".
Similarly, decisions of trademark examiners may be appealed to the Trademark Trial and Appeal Board, with subsequent appeals directed to the Federal Circuit, or a civil action may also be brought.
In recent years, the USPTO has seen increasing delays between when a patent application is filed and when it issues. To address its workload challenges, the USPTO has undertaken an aggressive program of hiring and recruitment. The USPTO hired 1,193 new patent examiners in Fiscal Year 2006 (year ending September 30, 2006), 1,215 new examiners in fiscal 2007, and 1,211 in fiscal year 2008. The USPTO expected to continue hiring patent examiners at a rate of approximately 1,200 per year through 2012; however, due to a slowdown in new application filings since the onset of the late-2000s economic crisis, and projections of substantial declines in maintenance fees in coming years, the agency imposed a hiring freeze in early March 2009.
In 2006, USPTO instituted a new training program for patent examiners called the "Patent Training Academy". It is an eight-month program designed to teach new patent examiners the fundamentals of patent law, practice and examination procedure in a college-style environment. Because of the impending USPTO budget crisis previously alluded to, it had been rumored that the Academy would be closed by the end of 2009. Focarino, then Acting Commissioner for Patents, denied in a May 2009 interview that the Academy was being shut down, but stated that it would be cut back because the hiring goal for new examiners in fiscal 2009 was reduced to 600. Ultimately, 588 new patent examiners were hired in fiscal year 2009.
In 2016, the USPTO partnered with the Girl Scouts of the USA to create an "Intellectual Property Patch" merit badge, which is awarded to Girl Scouts at four different levels.
In October 2021, President Joe Biden nominated attorney Kathi Vidal to serve as the USPTO director.

For many years, Congress has "diverted" about 10% of the fees that the USPTO collected into the general treasury of the United States. In effect, this took money collected from the patent system to use for the general budget. This fee diversion has been generally opposed by patent practitioners (e.g., patent attorneys and patent agents), inventors, the USPTO, as well as former federal judge Paul R. Michel. These stakeholders would rather use the funds to improve the patent office and patent system, such as by implementing the USPTO's 21st Century Strategic Plan. The last six annual budgets of the George W. Bush administration did not propose to divert any USPTO fees, and the first budget of the Barack Obama administration continues this practice; however, stakeholders continue to press for a permanent end to fee diversion.
The discussion of which party can appropriate the fees is more than a financial question. Patent fees represent a policy lever that influences both the number of applications submitted to the office as well as their quality.
On July 31, 1790, the first U.S. patent was issued to Samuel Hopkins for an improvement "in the making of Pot ash and Pearl ash by a new Apparatus and Process". This patent was signed by then President George Washington.
The X-Patents (the first 10,280 issued between 1790 and 1836) were destroyed by a fire; fewer than 3,000 of those have been recovered and re-issued with numbers that include an "X". The X generally appears at the end of the numbers hand-written on full-page patent images; however, in patent collections and for search purposes, the X is considered to be the patent type – analogous to the "D" of design patents – and appears at the beginning of the number. The X distinguishes the patents from those issued after the fire, which began again with patent number 1.
Each year, the PTO issues over 150,000 patents to companies and individuals worldwide. As of December 2011, the PTO has granted 8,743,423 patents and has received 16,020,302 applications.
On June 19, 2018, the 10 millionth U.S. patent was issued to Joseph Marron for invention of a "Coherent LADAR [System] Using Intra-Pixel Quadrature Detection" to improve laser detection and ranging (LADAR). The patent was the first to receive the newly redesigned patent cover. It was signed by President Donald Trump during a special ceremony at the Oval Office.
The USPTO examines applications for trademark registration, which can be filed under five different filing bases: use in commerce, intent to use, foreign application, foreign registration, or international registration. If approved, the trademarks are registered on either the Principal Register or the Supplemental Register, depending upon whether the mark meets the appropriate distinctiveness criteria. This federal system governs goods and services distributed via interstate commerce, and operates alongside state level trademark registration systems.
Trademark applications have grown substantially in recent years, jumping from 296,490 new applications in 2000, to 345,000 new applications in 2014, to 458,103 new applications in 2018. Recent growth has been driven partially by growing numbers of trademark applications originating in China; trademark applications from China have grown more than 12-fold since 2013, and in 2017, one in every nine trademark applications reviewed by the U.S. Trademark Office originated in China.
Since 2008, the Trademark Office has hosted a National Trademark Expo every two years, billing it as "a free, family-friendly event designed to educate the public about trademarks and their importance in the global marketplace." The Expo features celebrity speakers such as Anson Williams (of the television show Happy Days) and basketball player Kareem Abdul-Jabbar and has numerous trademark-holding companies as exhibitors. Before the 2009 National Trademark Expo, the Trademark Office designed and launched a kid-friendly trademark mascot known as T. Markey, who appears as an anthropomorphized registered trademark symbol. T. Markey is featured prominently on the Kids section of the USPTO website, alongside fellow IP mascots Ms. Pat Pending (with her robot cat GeaRS) and Mark Trademan.
In 2020, trademark applications saw some of the sharpest declines and increases in American history. In the spring, COVID-19 pandemic lockdowns led to reduced filings, which then increased in July 2020 to exceed the previous year's. August 2020 was subsequently the highest month of trademark filings in the history of the U.S. Patent and Trademark Office.

The USPTO only allows certain qualified persons to practice before the USPTO. Practice includes filing of patent and trademark applications on behalf of individuals and companies, prosecuting the patent and trademark applications, and participating in administrative appeals and other proceedings before the PTO examiners, examining attorneys and boards. The USPTO sets its own standards for who may practice. Any person who practices patent law before the USPTO must become a registered patent attorney or agent. A patent agent is a person who has passed the USPTO registration examination (the "patent bar") but has not passed any state bar exam to become a licensed attorney; a patent attorney is a person who has passed both a state bar and the patent bar and is in good standing as an attorney. A patent agent can only act in a representative capacity in patent matters presented to the USPTO, and may not represent a patent holder or applicant in a court of law. To be eligible for taking the patent bar exam, a candidate must possess a degree in "engineering or physical science or the equivalent of such a degree". Any person who practices trademark law before the USPTO must be an active member in good standing of the highest court of any state.
The United States allows any citizen from any country to sit for the patent bar (if he/she has the requisite technical background). Only Canada has a reciprocity agreement with the United States that confers upon a patent agent similar rights.
An unrepresented inventor may file a patent application and prosecute it on his or her own behalf (pro se). If it appears to a patent examiner that an inventor filing a pro se application is not familiar with the proper procedures of the Patent Office, the examiner may suggest that the filing party obtain representation by a registered patent attorney or patent agent. The patent examiner cannot recommend a specific attorney or agent, but the Patent Office does post a list of those who are registered.
While the inventor of a relatively simple-to-describe invention may well be able to produce an adequate specification and detailed drawings, there remains language complexity in what is claimed, either in the particular claim language of a utility application, or in the manner in which drawings are presented in a design application. There is also skill required when searching for prior art that is used to support the application and to prevent applying for a patent for something that may be unpatentable. A patent examiner will make special efforts to help pro se inventors understand the process but the failure to adequately understand or respond to an Office action from the USPTO can endanger the inventor's rights, and may lead to abandonment of the application.
The USPTO accepts patent applications filed in electronic form. Inventors or their patent agents/attorneys can file applications as Adobe PDF documents. Filing fees can be paid by credit card or by a USPTO "deposit account".
The lobby of the Public Search Facility, looking out toward the atrium, inside the Madison Building of the USPTO. The bronze bust of Thomas Jefferson is at the far right. Researchers can access patent search databases within the facility.
The USPTO web site provides free electronic copies of issued patents and patent applications as multiple-page TIFF (graphic) documents. The site also provides Boolean search and analysis tools.
The USPTO's free distribution service only distributes the patent documents as a set of TIFF files. Numerous free and commercial services provide patent documents in other formats, such as Adobe PDF and CPC.
The USPTO has been criticized for granting patents for impossible or absurd, already known, or arguably obvious inventions. Economists have documented that, although the USPTO makes mistakes when granting patents, these mistakes might be less prominent than some might believe.
U.S. Patent 5,443,036, "Method of exercising a cat", covers having a cat chase the beam from a laser pointer. The patent has been criticized as being obvious.
U.S. Patent 6,004,596, "Sealed crustless sandwich", issued in 1999, covers the design of a sandwich with crimped edges. However, all claims of the patent were subsequently canceled by the PTO upon reexamination.
U.S. Patent 6,025,810, "Hyper-light-speed antenna", an antenna that sends signals faster than the speed of light. According to the description in the patent, "The present invention takes a transmission of energy, and instead of sending it through normal time and space, it pokes a small hole into another dimension, thus, sending the energy through a place which allows transmission of energy to exceed the speed of light."
U.S. Patent 6,368,227, "Method of swinging on a swing", issued April 9, 2002, was granted to a seven-year-old boy, whose father, a patent attorney, wanted to demonstrate how the patent system worked to his son who was five years old at the time of the application. The PTO initially rejected it due to prior art, but eventually issued the patent. However, all claims of the patent were subsequently canceled by the PTO upon reexamination.
U.S. Patent 6,960,975, "Space vehicle propelled by the pressure of inflationary vacuum state", describes an anti-gravity device. In November 2005, the USPTO was criticized by physicists for granting it. The journal Nature first highlighted this patent issued for a device that presumably amounts to a perpetual motion machine, defying the laws of physics. The device comprises a particular electrically superconducting shield and electromagnetic generating device. The examiner allowed the claims because the design of the shield and device was novel and not obvious. In situations such as this where a substantial question of patentability is raised after a patent is issued, the Commissioner of the Patent Office can order a reexamination of the patent.
U.S. Trademark 77,139,082, "Cloud Computing" for Dell, covering "custom manufacture of computer hardware for use in data centers and mega-scale computing environments for others", was allowed by a trademark attorney on July 8, 2008. Cloud computing is a generic term that could define technology infrastructure for years to come, which had been in general use at the time of the application. The application was rejected on August 12, 2008, as descriptive and generic.
U.S. Trademark 75,215,401, "Netbook" for Psion, covering "laptop computers" was registered on November 21, 2000. Although the company discontinued the netBook line in November 2003 and allowed the trademark to become genericized through use by journalists and vendors (products marketed as 'netbooks' include the Dell Inspiron Mini Series, Asus eeePC, HP Mini 1000, MSI Wind Netbook and others), USPTO subsequently rejected a number of trademarks citing a "likelihood of confusion" under section 2(d), including 'G NETBOOK' (U.S. Trademark 77,527,311 rejected October 31, 2008), MSI's 'WIND NETBOOK' (U.S. Trademark ) and Coby Electronics' 'COBY NETBOOK' (U.S. Trademark 77,590,174) rejected January 13, 2009. Psion also delivered a batch of cease-and-desist letters on December 23, 2008, relating to the genericized trademark.

The USPTO has been criticized for taking an inordinate amount of time in examining patent applications. This is particularly true in the fast-growing area of business method patents. As of 2005, patent examiners in the business method area were still examining patent applications filed in 2001.
The delay was attributed by spokesmen for the Patent Office to a combination of a sudden increase in business method patent filings after the 1998 State Street Bank decision, the unfamiliarity of patent examiners with the business and financial arts (e.g., banking, insurance, stock trading etc.), and the issuance of a number of controversial patents (e.g., U.S. Patent 5,960,411 "Amazon one click patent") in the business method area.
Effective August 2006, the USPTO introduced an accelerated patent examination procedure in an effort to allow inventors a speedy evaluation of an application with a final disposition within twelve months. The procedure requires additional information to be submitted with the application and also includes an interview with the examiner. The first accelerated patent was granted on March 15, 2007, with a six-month issuance time.
As of the end of 2008, there were 1,208,076 patent applications pending at the Patent Office. At the end of 1997, the number of applications pending was 275,295. Over those eleven years, the number of pending applications thus more than quadrupled.
December 2012 data showed that there were 597,579 unexamined patent applications in the backlog, a reduction of more than 50% over the four years since 2009. First-action pendency was reported as 19.2 months.
In 2012, the USPTO initiated an internal investigation into allegations of fraud in the telework program, which allowed employees to work from home. Investigators discovered that some patent examiners had lied about the hours they had worked, but high level officials prevented access to computer records, thus limiting the number of employees who could be punished.
Central processing unit

A central processing unit (CPU), often simply called a processor, is an electronic unit or integrated circuit that executes machine instructions (program code) and is the principal part of the hardware of a computer or programmable logic controller. It is sometimes referred to as a microprocessor.
Initially, the term central processing unit described a specialized class of logical machines designed to execute complex computer programs. Because this purpose corresponded closely to the functions of the computer processors of the time, the term was naturally transferred to the computers themselves. The term and its abbreviation came into use for computer systems in the 1960s. The design, architecture and implementation of processors have changed many times since then, but their main functions have remained the same.
The main characteristics of a CPU are its clock speed, performance, power consumption, the lithographic process node used in its production (for microprocessors), and its architecture.
Early CPUs were designed as unique building blocks for unique, and sometimes one-of-a-kind, computer systems. Later, computer manufacturers moved from the expensive practice of developing processors intended to run a single program, or a few highly specialized ones, to mass production of standard classes of general-purpose processor devices. The trend toward standardization of computer components began in the era of rapid development of semiconductors, mainframes and minicomputers, and it accelerated with the advent of integrated circuits. The creation of microchips made it possible to increase the complexity of CPUs further while reducing their physical size. The standardization and miniaturization of processors have led to a deep penetration of digital devices based on them into everyday life. Modern processors can be found not only in high-tech devices such as computers, but also in cars, calculators, mobile phones, and even children's toys. Most often they take the form of microcontrollers, in which, in addition to the computing device, additional components are located on the chip (program and data memory, interfaces, input-output ports, timers, etc.). The computing capabilities of a modern microcontroller are comparable to, and often significantly exceed, those of personal-computer processors of thirty years ago.
The history of processor manufacturing closely parallels the history of manufacturing technology for other electronic components and circuits.
The first stage, covering the period from the 1940s to the late 1950s, was the creation of processors using electromechanical relays, ferrite cores (memory devices) and vacuum tubes. These were installed in special slots on modules assembled into racks, and a large number of such racks, connected by conductors, together formed a processor. Distinctive features were low reliability, low speed and high heat dissipation.
The second stage, from the mid-1950s to the mid-1960s, was the introduction of transistors. Transistors were mounted on boards of nearly modern appearance, installed in racks; as before, the average processor consisted of several such racks. Performance and reliability increased, and power consumption fell.
The third stage, which arrived in the mid-1960s, was the use of microchips. Initially, chips of a low degree of integration were used, containing simple transistor and resistor assemblies; then, as the technology developed, chips implementing individual elements of digital circuitry appeared (first elementary gates and logic elements, then more complex elements such as elementary registers, counters and adders), and later chips containing functional blocks of the processor: a microprogram sequencer, an arithmetic logic unit, registers, and units for working with the data and command buses.
The fourth stage, in the early 1970s, was the creation, thanks to a technological breakthrough, of LSI and VLSI chips (large-scale and very-large-scale integrated circuits, respectively) and of the microprocessor: a chip on whose die all the main elements and blocks of the processor are physically located. In 1971 Intel created the world's first 4-bit microprocessor, the 4004, designed for use in calculators. Gradually, almost all processors came to be produced in microprocessor form. For a long time, the only exceptions were small-scale processors hardware-optimized for special problems (for example, supercomputers or processors for certain military tasks) or processors with special requirements for reliability, speed, or protection from electromagnetic pulses and ionizing radiation. Gradually, with falling costs and the spread of modern technologies, these too have begun to be manufactured in microprocessor form.
Today the words "microprocessor" and "processor" have become practically synonymous, but this was not always so: conventional (large) and microprocessor-based computers coexisted for at least 10–15 years, and only in the early 1980s did microprocessors supplant their larger counterparts. Nevertheless, the central processing units of some supercomputers even today are complex assemblies built from LSI and VLSI chips.
The transition to microprocessors then made possible the creation of personal computers, which eventually reached almost every home.
The first commercially available microprocessor was the 4-bit Intel 4004, introduced on November 15, 1971 by Intel Corporation. It contained 2,300 transistors, ran at a clock frequency of up to 740 kHz, and cost $300.
Then it was replaced by the 8-bit Intel 8080 and 16-bit 8086, which laid the foundation for the architecture of all modern desktop processors. Due to the prevalence of 8-bit memory modules, the cheap 8088 was released, a simplified version of the 8086 with an 8-bit data bus.
This was followed by a modified version, the 80186.
The 80286 processor introduced a protected mode with 24-bit addressing, which allowed the use of up to 16 MB of memory.
The Intel 80386 processor appeared in 1985 and introduced an improved protected mode, 32-bit addressing that allowed up to 4 GB of RAM and support for a virtual memory mechanism. This line of processors is built on a register computing model.
In parallel, microprocessors based on a stack computing model were developed.
Over the years, many different microprocessor architectures have been developed. Many of them (in supplemented and improved form) are still used today. For example, the Intel x86 architecture developed first into the 32-bit IA-32 and later into the 64-bit x86-64 (which Intel calls EM64T). x86 processors were originally used only in IBM personal computers (IBM PC), but are now increasingly used in all areas of the computer industry, from supercomputers to embedded solutions. Other architectures include Alpha, POWER, SPARC, PA-RISC and MIPS (RISC architectures) and IA-64 (an EPIC architecture).
In modern computers, processors take the form of a compact module (about 5 × 5 × 0.3 cm in size) that is inserted into a ZIF socket (AMD) or pressed against the contacts of an LGA socket (Intel).
Most modern processors for personal computers are based on some version of the cyclic sequential processing scheme described by John von Neumann.
In July 1946, Burks, Goldstine, and von Neumann wrote a famous monograph entitled "Preliminary Discussion of the Logical Design of an Electronic Computing Instrument", which described in detail the design and technical characteristics of the future computer, later known as the "von Neumann architecture". This work developed the ideas outlined by von Neumann in May 1945 in a manuscript entitled "First Draft of a Report on the EDVAC".
A distinctive feature of the von Neumann architecture is that instructions and data are stored in the same memory.
Different architectures and different commands may require additional steps. For example, arithmetic instructions may require additional memory accesses during which operands are read and results are written.
Run cycle steps:
1. The processor places the number stored in the program counter register onto the address bus and issues a read command to the memory.
2. The number placed on the bus is a memory address; the memory, having received the address and the read command, places the contents stored at that address onto the data bus and signals readiness.
3. The processor receives the number from the data bus, interprets it as an instruction (machine instruction) from its instruction set, and executes it.
4. If the instruction is not a jump instruction, the processor increments the number stored in the program counter by one (assuming the length of each instruction is one); as a result, the program counter holds the address of the next instruction.
This cycle is repeated endlessly, and it is this cycle that is called the process (hence the name of the device: processor).
During a process, the processor reads the sequence of instructions contained in memory and executes them. Such a sequence of instructions is called a program and represents the processor's algorithm. The order in which instructions are read changes if the processor reads a jump instruction; in that case the address of the next instruction may be different. Another example of a change in the process is the receipt of a halt instruction or a switch to servicing an interrupt.
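A minimal Python sketch of this fetch-decode-execute cycle, for a toy accumulator machine with a handful of invented instructions (not any real instruction set), might look like this:

```python
# Toy fetch-decode-execute loop for an imaginary accumulator machine.
# The instruction set (LOAD/ADD/JUMP/HALT) is invented for illustration only.

def run(memory: list, max_steps: int = 1000) -> int:
    pc = 0   # program counter
    acc = 0  # accumulator register
    for _ in range(max_steps):
        opcode, operand = memory[pc]  # fetch: read the instruction at address pc
        pc += 1                       # assume each instruction occupies one cell
        if opcode == "LOAD":          # decode and execute
            acc = operand
        elif opcode == "ADD":
            acc += operand
        elif opcode == "JUMP":        # a jump overrides the incremented pc
            pc = operand
        elif opcode == "HALT":        # a halt instruction ends the process
            return acc
        else:
            raise ValueError(f"unknown opcode {opcode!r}")
    raise RuntimeError("program did not halt")

# Example program: load 2, add 3, halt -> result 5
program = [("LOAD", 2), ("ADD", 3), ("HALT", None)]
print(run(program))  # 5
```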
The instructions of the central processor are the lowest level of computer control, so the execution of each instruction is unavoidable and unconditional. No check is made of the admissibility of the actions performed; in particular, possible loss of valuable data is not checked. For the computer to perform only legal actions, the instructions must be properly organized into the desired program.
The speed of the transition from one stage of the cycle to another is determined by the clock generator, which produces pulses that set the rhythm for the central processor. The frequency of the clock pulses is called the clock frequency.
Pipelining was introduced into central processors in order to increase performance. The execution of each instruction usually requires a number of operations of the same kind, for example: fetching the instruction from RAM, decoding the instruction, addressing an operand in RAM, fetching the operand from RAM, executing the instruction, and writing the result to RAM. Each of these operations is associated with one stage of the pipeline. For example, a MIPS-I microprocessor pipeline contains four stages:
- instruction fetch and decode,
- operand addressing and fetch from RAM,
- execution of the arithmetic operation,
- write-back of the result.
As soon as the k-th stage of the pipeline is freed, it immediately begins working on the next instruction. If we assume that each stage of the pipeline spends one unit of time on its work, then executing an instruction on a pipeline of n stages takes n units of time, but in the most optimistic case the result of each subsequent instruction is obtained every unit of time.
Indeed, in the absence of a pipeline, executing an instruction takes n units of time (since it still requires fetching, decoding, and so on), and m instructions require n ⋅ m units of time; with a pipeline (in the most optimistic case), executing m instructions takes only n + m − 1 units of time.
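The timing argument can be checked with a few lines of arithmetic; the functions below simply restate the ideal, stall-free case described above with example values of n and m.

```python
# Ideal pipeline timing: n pipeline stages, m instructions, one time unit per
# stage, no stalls or branch flushes (the most optimistic case described above).

def time_without_pipeline(n_stages: int, m_instructions: int) -> int:
    return n_stages * m_instructions

def time_with_pipeline(n_stages: int, m_instructions: int) -> int:
    # the first instruction takes n units, each following one completes 1 unit later
    return n_stages + (m_instructions - 1)

n, m = 5, 100
print(time_without_pipeline(n, m))  # 500
print(time_with_pipeline(n, m))     # 104
print(time_without_pipeline(n, m) / time_with_pipeline(n, m))  # speedup ~4.8x
```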
Factors that reduce the efficiency of the pipeline:
1. Pipeline stalls, when some stages are not used (for example, addressing and fetching an operand from RAM are not needed if the instruction works only with registers).
2. Waiting: if the next instruction uses the result of the previous one, it cannot begin executing before the previous one completes (this is overcome by out-of-order execution).
3. Flushing the pipeline when a branch instruction enters it (this problem can be mitigated by branch prediction).
Some modern processors have more than 30 pipeline stages, which improves performance but also increases the penalty for idle time (for example, in the event of a conditional-branch misprediction). There is no consensus on the optimal pipeline length: different programs may have significantly different requirements.
A superscalar architecture is the ability to execute several machine instructions in one processor clock cycle by increasing the number of execution units. The emergence of this technology led to a significant increase in performance; at the same time, there is a limit to the growth in the number of execution units beyond which performance practically stops growing and the execution units sit idle. A partial solution to this problem is, for example, Hyper-Threading technology.
Complex instruction set computer (CISC): a processor architecture based on a sophisticated instruction set. Typical representatives of CISC are the x86 family of microprocessors (although for many years these processors have been CISC only in their externally visible instruction set: at the start of execution, complex instructions are broken down into simpler micro-operations executed by a RISC core).
Reduced instruction set computer (RISC): a processor architecture built on a simplified instruction set, characterized by fixed-length instructions, a large number of registers, register-to-register operations, and the absence of indirect addressing. The RISC concept was developed by John Cocke of IBM Research; the name was coined by David Patterson.
The simplification of the instruction set is intended to shorten the pipeline, which avoids delays on conditional and unconditional jumps. A homogeneous set of registers simplifies the compiler's work when optimizing executable program code. In addition, RISC processors are characterized by lower power consumption and heat dissipation.
Among the first implementations of this architecture were the MIPS, PowerPC, SPARC, Alpha and PA-RISC processors. ARM processors are widely used in mobile devices.
Minimum instruction set computer (MISC): a further development of the ideas of Chuck Moore's team, who believe that the principle of simplicity, originally central to RISC processors, was pushed into the background too quickly. In the race for maximum performance, RISC processors caught up with and overtook many CISC processors in complexity. The MISC architecture is based on a stack computing model with a small number of instructions (approximately 20–30).
Very long instruction word (VLIW): an architecture in which the parallelism of computations is expressed explicitly in the processor's instruction set. It is the basis of the EPIC architecture. The key difference from superscalar CISC processors is that in the latter a part of the processor (the scheduler) assigns work to the execution units at run time, which takes a fairly short time, whereas for a VLIW processor the compiler does this at compile time, which takes much longer but, in theory, allows better scheduling and therefore higher performance.
Examples include the Intel Itanium, Transmeta Crusoe, Efficeon and Elbrus.
Multi-core processors contain several processor cores in one package (on one or more dies).
Processors designed to run a single copy of an operating system on multiple cores are a highly integrated implementation of multiprocessing.
The first multi-core microprocessor was IBM's POWER4, which appeared in 2001 and had two cores.
In October 2004, Sun Microsystems released the UltraSPARC IV dual-core processor, which consisted of two modified UltraSPARC III cores. In early 2005, the dual-core UltraSPARC IV+ was created.
On May 9, 2005, AMD introduced the first dual-core, single-chip processor for consumer PCs, the Athlon 64 X2 with the Manchester core. Shipments of the new processors officially began on June 1, 2005.
On November 14, 2005, Sun released the eight-core UltraSPARC T1, with each core running 4 threads.
On January 5, 2006, Intel introduced the first dual-core processor on a single Core Duo chip for a mobile platform.
In November 2006, the first quad-core Intel Core 2 Quad processor based on the Kentsfield core was released; it is an assembly of two Conroe dies in one package. A descendant of this processor was the Intel Core 2 Quad on the Yorkfield core (45 nm), which is architecturally similar to Kentsfield but has a larger cache and higher operating frequencies.
In October 2007, eight-core UltraSPARC T2s went on sale, each core running 8 threads.
On September 10, 2007, true (single-die) quad-core processors for AMD Opteron servers went on sale, known during development by the code name AMD Opteron Barcelona. On November 19, 2007, the quad-core AMD Phenom processor for home computers went on sale. These processors implement the new K8L (K10) microarchitecture.
AMD took its own route, manufacturing quad-core processors on a single die (unlike Intel, whose first quad-core processors were actually two dual-core dies glued together in one package). Despite the progressiveness of this approach, the company's first "quad-core", the AMD Phenom X4, was not very successful. Its lag behind contemporary competitor processors ranged from 5 to 30 percent or more, depending on the model and the specific tasks.
By the first or second quarter of 2009, both companies had updated their lines of quad-core processors. Intel introduced the Core i7 family, consisting of three models running at different frequencies. The main highlights of this processor are the use of a triple-channel (DDR3) memory controller and simultaneous multithreading that presents eight logical cores (useful for some specific tasks). In addition, thanks to general optimization of the architecture, processor performance improved significantly in many kinds of tasks. The weak side of the Core i7 platform is its excessive cost, since installing this processor requires an expensive motherboard based on the Intel X58 chipset and a triple-channel DDR3 memory kit, which at the time was also very expensive.
AMD, in turn, introduced the Phenom II X4 line of processors. In developing it, the company took its earlier mistakes into account: the cache size was increased (compared with the first-generation Phenom), and the processors began to be manufactured on a 45 nm process, which made it possible to reduce heat dissipation and significantly increase operating frequencies. Overall, the AMD Phenom II X4 is on a par with Intel's previous-generation processors (Yorkfield core) in performance but lags well behind the Intel Core i7. With the release of the 6-core AMD Phenom II X6 Black Thuban 1090T, the situation changed slightly in AMD's favor.
As of 2013, processors with two, three, four and six cores were widely available, as well as two-, three- and four-module AMD processors of the Bulldozer generation (the number of logical cores is twice the number of modules). In the server segment, 8-core Xeon and Nehalem processors (Intel) and 12-core Opterons (AMD) were also available.
Caching is the use of additional high-speed memory (a cache, from the French cacher, "to hide") to store copies of blocks of information from main (RAM) memory that are likely to be accessed in the near future.
There are first-, second- and third-level caches (denoted L1, L2 and L3, from Level 1, Level 2 and Level 3). The first-level cache has the lowest latency (access time) but a small size; in addition, L1 caches are often made multi-ported. Thus, AMD K8 processors could perform a 64-bit write and a 64-bit read, or two 64-bit reads, per clock; the AMD K8L can perform two 128-bit reads or writes in any combination; and Intel Core 2 processors can perform a 128-bit write and a 128-bit read per clock. An L2 cache usually has significantly higher access latency, but it can be made much larger. A level-3 cache is the largest and quite slow, but it is still much faster than RAM.
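As an illustration of why this hierarchy pays off, the sketch below computes an average memory access time from hit rates and latencies under a simple serial-lookup model; the cycle counts and hit rates used are invented, order-of-magnitude assumptions, not figures for any particular processor.

```python
# Average memory access time (AMAT) for a three-level cache hierarchy.
# Latencies (in clock cycles) and hit rates below are illustrative assumptions only.

def amat(levels, memory_latency):
    """levels: list of (latency_cycles, hit_rate) tuples, from L1 outward."""
    total, p_reach = 0.0, 1.0
    for latency, hit_rate in levels:
        total += p_reach * latency    # every access that reaches this level pays its latency
        p_reach *= (1.0 - hit_rate)   # misses fall through to the next level
    return total + p_reach * memory_latency

hierarchy = [(4, 0.95), (12, 0.80), (40, 0.50)]  # (latency, hit rate) for L1, L2, L3
print(amat(hierarchy, memory_latency=200))        # about 6 cycles on average
print(amat([], memory_latency=200))               # 200 cycles with no cache at all
```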
The von Neumann architecture has the disadvantage of being sequential. No matter how large the data array to be processed, each of its bytes must pass through the central processor, even if the same operation is required on all of them. This effect is called the von Neumann bottleneck.
To overcome this shortcoming, processor architectures, which are called parallel, have been proposed and are being proposed. Parallel processors are used in supercomputers.
Possible options for parallel architecture are (according to Flynn's classification):
SISD - single instruction stream, single data stream;
SIMD - single instruction stream, multiple data streams;
MISD - multiple instruction streams, single data stream;
MIMD - multiple instruction streams, multiple data streams.
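The SISD/SIMD distinction can be illustrated in code: a scalar loop applies one instruction to one data element at a time, while a vectorized NumPy operation expresses a single operation over many elements, which the library and hardware can map onto SIMD instructions. This is only a programming-model illustration, not a guarantee about which instructions a particular CPU will actually emit.

```python
import numpy as np

a = np.arange(1_000_000, dtype=np.float64)
b = np.ones(1_000_000, dtype=np.float64)

# SISD-style: one instruction stream operating on one data element at a time.
def add_scalar(x, y):
    out = np.empty_like(x)
    for i in range(len(x)):
        out[i] = x[i] + y[i]
    return out

# SIMD-style programming model: one operation expressed over many data elements;
# NumPy's vectorized add is typically compiled down to SIMD instructions.
def add_vectorized(x, y):
    return x + y

assert np.array_equal(add_scalar(a[:10], b[:10]), add_vectorized(a[:10], b[:10]))
```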
For digital signal processing, especially when processing time is limited, specialized high-performance signal microprocessors (digital signal processors, DSPs) with a parallel architecture are used.
Development begins with a technical specification given to the developers, on the basis of which decisions are made about the architecture of the future processor, its internal structure and its manufacturing technology. Various groups are tasked with developing the corresponding functional blocks of the processor and ensuring their interaction and electromagnetic compatibility. Because the processor is essentially a digital machine that fully follows the principles of Boolean algebra, a virtual model of the future processor is built using specialized software running on another computer. This model is used to test the processor, execute elementary instructions and significant amounts of code, work out the interaction of the various blocks of the device, optimize it, and look for the errors that are inevitable in a project of this scale.
After that, a physical model of the processor is built from digital gate-array chips and microcircuits containing elementary digital functional blocks. On it the electrical and timing characteristics of the processor are checked, the architecture is tested, the correction of discovered errors continues, and electromagnetic compatibility issues are clarified (for example, at a clock frequency of only 1 GHz, conductors as short as 7 mm already act as transmitting or receiving antennas).
Then begins the joint work of circuit engineers and process engineers, who use specialized software to convert the electrical circuit embodying the processor architecture into a chip layout. Modern design-automation systems can, in the general case, produce a set of photomask stencils directly from the electrical circuit. At this stage the technologists try to implement the solutions laid down by the circuit engineers within the limits of the available technology. This stage is one of the longest and most difficult in development and not infrequently requires circuit designers to compromise by abandoning some architectural decisions. A number of contract chip manufacturers (foundries) offer developers (design centers or fabless companies) a compromise approach in which, at the design stage, libraries of elements and blocks (standard cells) standardized for the available technology are used. This imposes a number of restrictions on the architectural solutions, but the technology-adjustment stage then essentially reduces to assembling Lego-like blocks. Fully custom microprocessors are, in general, faster than processors built from pre-existing libraries.
8 inch multi-chip silicon wafer
Main article: Technological process in the electronics industry

The next phase after design is the creation of a prototype microprocessor chip. Modern very-large-scale integrated circuits are manufactured by lithography: layers of conductors, insulators and semiconductors are applied in turn to the substrate of the future microprocessor (a thin disc of single-crystal silicon or sapphire) through special masks containing openings. The corresponding substances are evaporated in vacuum and deposited through the openings of the mask onto the chip. Etching is also used, in which an aggressive liquid removes the areas of the crystal not protected by the mask. About a hundred processor dies are formed on one substrate at a time. The result is a complex multilayer structure containing from hundreds of thousands to billions of transistors. Depending on how it is connected, a transistor works in the circuit as a transistor, resistor, diode or capacitor; creating these elements on the chip as separate devices is, in general, unprofitable. After lithography is finished, the substrate is sawn into individual dies. Thin gold wires are bonded to the gold pads formed on them, connecting the die to the contact pads of the chip package. Finally, in the general case, the heat spreader and the chip lid are attached.
Then comes the stage of testing the processor prototype, when its compliance with the specified characteristics is checked and remaining undetected errors are sought. Only after that is the microprocessor put into production. Even during production the processor is continuously optimized as technology improves, new design solutions appear and errors are found.
Simultaneously with the development of general-purpose microprocessors, sets of peripheral support chips are developed that will be used with the microprocessor and on the basis of which motherboards are created. Developing a chipset is a task no less difficult than creating the microprocessor chip itself.
In the last few years there has been a tendency to move some chipset components (the memory controller, the PCI Express controller) into the processor itself.
A processor's power consumption is closely tied to its manufacturing technology.
The first x86 processors consumed very little power by modern standards, a fraction of a watt. Growth in transistor counts and clock frequencies has increased this figure dramatically: the most powerful models consume 130 W or more. The power consumption factor, insignificant at first, now has a serious influence on the evolution of processors:
- improvement of the manufacturing process to reduce consumption, the search for new materials to reduce leakage currents, and lowering of the core supply voltage;
- the appearance of sockets (processor sockets) with a large number of contacts (more than 1,000), most of which are dedicated to powering the processor; for example, processors for the popular LGA775 socket have 464 main power contacts (about 60% of the total);
- a change in processor packaging: the die moved from the inside of the package to its top for better heat transfer to the cooler's heatsink;
- the installation of temperature sensors in the die and an overheating-protection system that reduces the processor frequency or even stops the processor if the temperature rises unacceptably;
- the appearance in recent processors of intelligent systems that dynamically adjust the supply voltage and the frequencies of individual blocks and cores, and disable unused blocks and cores;
- the emergence of energy-saving sleep modes that the processor enters under low load.
Another CPU parameter is the maximum allowable temperature of the semiconductor die (TJMax) or of the processor surface at which normal operation is possible. Many consumer processors are rated for surface (case) temperatures no higher than 85 °C. The processor's temperature depends on its workload and on the quality of the heat sinking. If the temperature exceeds the manufacturer's maximum, normal operation is not guaranteed: program errors or a system freeze may occur, and in some cases irreversible damage to the processor itself. Many modern processors can detect overheating and throttle their own performance in response.
Passive heatsinks and active coolers are used to remove heat from microprocessors. For better contact with the heatsink, thermal paste is applied to the surface of the processor.


To measure the temperature of the microprocessor, a temperature sensor is usually placed inside the chip, near the center of the die under the lid. In Intel microprocessors the sensor is a thermal diode or a transistor with its collector and base connected so that it works as a thermal diode; in AMD microprocessors it is a thermistor.
The most popular processors today are produced by:
- for personal computers, laptops and servers - Intel and AMD;
- for supercomputers - Intel and IBM;
- for graphics accelerators and high-performance computing - NVIDIA and AMD;
- for mobile phones and tablets[9] - Apple, Samsung, HiSilicon and Qualcomm.
Most processors for personal computers, laptops and servers are Intel-compatible in their instruction set. Most processors currently used in mobile devices are ARM-compatible, that is, they implement the instruction set and programming interfaces developed by ARM Limited.
Intel processors: 8086, 80286, i386, i486, Pentium, Pentium II, Pentium III, Celeron (simplified Pentium), Pentium 4, Core 2 Duo, Core 2 Quad, Core i3, Core i5, Core i7, Core i9, Xeon (series of processors for servers), Itanium, Atom (series of processors for embedded technology), etc.
AMD's line includes x86-architecture processors (analogues of the 80386 and 80486, the K6 family and the K7 family - Athlon, Duron, Sempron) and x86-64 processors (Athlon 64, Athlon 64 X2, Phenom, Opteron, etc.). IBM processors (POWER6, POWER7, Xenon, PowerPC) are used in supercomputers, seventh-generation game consoles and embedded systems, and were previously used in Apple computers.
Market shares of processor sales for personal computers, laptops and servers, by year

Other notable processor families include:
- Loongson Family (Godson)
- ShenWei Family (SW)
- YinHeFeiTeng Family (FeiTeng)
- NEC VR (MIPS, 64 bit)
- Hitachi SH (RISC)
A common misconception among consumers is that processors with higher clock frequencies always perform better than processors with lower clock frequencies. In fact, performance comparisons based on clock frequency are valid only for processors with the same architecture and microarchitecture.
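A back-of-the-envelope model shows why frequency alone is misleading (the IPC and clock figures below are hypothetical): useful throughput is roughly instructions per cycle (IPC) multiplied by clock frequency.

```python
# Hypothetical CPUs: a higher-clocked design can still be slower overall
# if its microarchitecture retires fewer instructions per cycle (IPC).
def throughput(ipc, ghz):
    return ipc * ghz * 1e9   # instructions per second (rough model)

cpu_a = throughput(ipc=1.5, ghz=4.0)   # 6.0e9 instructions/s
cpu_b = throughput(ipc=3.0, ghz=2.5)   # 7.5e9 instructions/s
print(cpu_b > cpu_a)                   # True: lower clock, higher performance
```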


Polkadot is a scalable heterogeneous multichain. It allows new blockchain projects to interoperate and share security while keeping completely arbitrary state-transition functions. In short, Polkadot is a network that connects blockchains.
Polkadot provides a framework within which new blockchains can be created and to which existing blockchains can migrate if their communities so decide, offering shared security and trust-free transactions within the network. When it comes to blockchain, enthusiasm often outruns the hard questions. For example, if a particular blockchain is a separate immutable chain, how can its data be transferred to another blockchain? On its own, it cannot - and that is why Polkadot was created.
The official website describes Polkadot as a "multi-chain technology". In simple terms, it is a network that connects blockchains, creating a space where data from different blockchains can be processed and exchanged quickly and securely. Importantly, this requires no system-wide updates or hard forks; that is the aim of the project.
Polkadot's mission is to move the existing structure of the Internet towards Web3: a completely new, decentralized web. Polkadot helps connect private and public blockchains and other networks in the Web3 ecosystem, making possible an Internet in which independent blockchains can exchange information and transactions trustlessly through Polkadot. It is a project aimed at the digital world of the future: the Internet of Things and network decentralization.
The idea was introduced by Gavin Wood, co-founder of Ethereum and founder of Parity Technologies, at the end of 2016. In mid-2017 the Web3 Foundation was created, which manages the project together with Parity Technologies, and in October 2017 a successful ICO was held (covered later; the current state of the DOT token can be checked on CoinMarketCap). Finally, note that the Polkadot genesis block was only scheduled to launch in the third quarter of 2019, so some of the information below is unconfirmed and theoretical.

Polkadot consists of many different parachains, which makes it possible to achieve the required level of anonymity. The peculiarity of the system is that transactions can be carried out simultaneously and distributed across blockchains. The main goal of the Polkadot ecosystem is to ensure that all participating blockchains remain secure and that transactions are carried out in good faith.
The Polkadot ecosystem consists of three components (a toy sketch of how they fit together follows the list):
Relay chain. This is the center of the system, which ensures the exchange of transactions between chains. It also guarantees consensus and security.
Parachains. These are "parallel blockchains" that process transactions and pass them on to the main (relay) chain.
Bridges. These are specific links to blockchains with their own consensus, such as Ethereum.
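A toy sketch of how these three components relate (Python, purely illustrative; it does not reflect Polkadot's real protocol, data structures or APIs): parachains produce candidate blocks, the relay chain records and finalizes them, and a bridge wraps an external chain so it can be handled the same way.

```python
# Toy model of the Polkadot component roles (illustrative only).
class Parachain:
    def __init__(self, name):
        self.name = name
        self.height = 0

    def produce_candidate(self, payload):
        self.height += 1
        return {"chain": self.name, "height": self.height, "payload": payload}

class RelayChain:
    def __init__(self):
        self.finalized = []          # candidates accepted into relay-chain blocks

    def include(self, candidate):
        self.finalized.append(candidate)

class Bridge(Parachain):
    """Wraps an external chain (e.g. Ethereum) so it looks like a parachain."""
    pass

relay = RelayChain()
dex = Parachain("toy-dex")
eth = Bridge("ethereum-bridge")

relay.include(dex.produce_candidate({"transfer": 10}))
relay.include(eth.produce_candidate({"lock": "0xabc"}))
print(len(relay.finalized))  # 2 candidates finalized by the shared relay chain
```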

The token in the Polkadot ecosystem is called DOT. It has three main functions:
Governance. Holders of the DOT cryptocurrency have full control over the protocol. There are no miners; the ecosystem uses a proof-of-stake algorithm, so holders decide on all exceptional events such as protocol updates and fixes.
Functioning. The system uses game theory and rewards DOT holders who behave honestly; cheaters lose part of their stake. This principle keeps the system secure.
Bonding and payment. Bonding DOT allows new parachains to be added, while removing inactive parachains releases the bonded tokens.
The Polkadot ICO was so successful that the tokens sold out on the third day. At that point DOT was valued at 0.109 ETH, and the 5 million available tokens were sold for 485,331 ETH. All tokens are still illiquid - they cannot be bought or sold - and ICO participants will receive their tokens only when the genesis block is launched in 2019.
Polkadot looks like a rather controversial project. On the one hand, the idea is strong and takes blockchain technology to the next level. On the other hand, the investments are frozen for two years, and who knows what might happen in that time. Some experts, including the well-known Ian Balina, doubt that the Polkadot team will be able to build a competitive product, especially while competitors such as Cosmos, ChainLink, Wanchain and Atomic Cross Chain do not stand still. Some ICO ratings reflect this: Polkadot scores 2.5 on ICObench and ICObazaar, and ICOrating also considers the risks medium. Two years is a very long time in the fast-moving world of blockchain technology.
Another problem is that the project's funds were frozen due to a vulnerability in the Ethereum Parity Wallet library contract. Of the roughly 500,000 ETH that "disappeared into nowhere", about 330,000 had been raised through the ICO. The conclusion would be very pessimistic were it not for one detail: in May of this year Gavin Wood announced that the first Polkadot proof-of-concept (PoC) would be launched soon, which suggests the patient is more alive than dead. Still, nothing can be changed now - the funds have been invested, and the tokens will appear only in 2019. We can only wait and see whether this becomes reality.
To protect the Polkadot ecosystem, the Nominated Proof-of-Stake (NPoS) protocol is used. The network's health is maintained by four groups of participants with different levels of responsibility:
Validators are responsible for building the underlying Relay Chain; they connect to parachains to validate blocks and complete cross-chain transactions. To become a validator, one must post a stake (in DOT). Validators are rewarded with DOT for performing their duties and can lose their stake if malicious behaviour is detected.
Nominators are the main guarantors of blockchain security. They appoint and monitor validators, backing them with their own stake and receiving a share of the reward for doing so. Each nominator is responsible for the behaviour of its chosen validators; its reward depends on how honest and active those validators are.
Collators are full-node operators of a particular parachain. They assemble blocks in their chain and submit them to validators for signing, providing proofs of the transactions. In critical situations they can add blocks to their parachain independently.
Fishermen act as a kind of people's watchdog. They cannot verify transactions or sign blocks; their task is to catch a dishonest "official" (validator, nominator or collator) red-handed. Taking part in fishing also requires a stake, much smaller than a nominator's or validator's, but the reward for stopping an attack on the network is substantial, and its size still depends on the size of the deposit.
This arrangement allows for a fairly large number of validators and gives nominators an incentive to nominate candidates with a clean record for this role.
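As a rough illustration of the nominator-validator relationship (a sketch only; the real NPoS election uses a far more sophisticated method than the simple stake-weighted sampling shown here, and the candidate names and stake amounts are invented):

```python
import random

# Toy stake-weighted validator election (illustrative; not the real NPoS algorithm).
# Nominators back candidate validators with stake; the more total backing a
# candidate has, the more likely it is to enter the active validator set.
nominations = {
    "validator_a": 1200,   # total DOT-like stake backing each candidate
    "validator_b": 800,
    "validator_c": 500,
    "validator_d": 100,
}

def elect(nominations, seats, rng=random):
    candidates = list(nominations)
    weights = [nominations[c] for c in candidates]
    elected = set()
    while len(elected) < min(seats, len(candidates)):
        elected.add(rng.choices(candidates, weights=weights, k=1)[0])
    return elected

print(elect(nominations, seats=2))
```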
To produce and finalize blocks in the relay chain, the BABE block-production scheme and the GRANDPA finality scheme are used. This gives an almost one hundred percent probability that a block is produced in each time slot, while GRANDPA's provable, deterministic finality ensures the stability and immutability of the ecosystem. The system has high throughput, and accidental forks are practically impossible. In addition, NPoS prevents holders of large balances from taking full control of the validating nodes, guaranteeing the decentralization of the network. Incidentally, the Ethereum system after the Metropolis upgrade is quite compatible with Polkadot, as are other blockchains that support smart contracts or the Schnorr signature scheme.
Does it make sense for other ecosystems to become parachains? Yes: it is primarily beneficial for young projects, but established cryptocurrencies also gain a lot, since their developers no longer need to worry constantly about securing the chain and can focus on innovation.
Since the system runs on the Nominated Proof-of-Stake (NPoS) consensus algorithm, DOT cannot be mined on computing hardware. Tokens are earned by users who help keep the ecosystem running (validators, nominators and fishermen).
Advantages:
- An innovative model with great promise
- Simplified application integration scheme
- High liquidity and functionality of DOT tokens
- Professional development team
- High security and good network scalability
- Open source
Disadvantages:
- The project is not yet complete; some technical aspects are still under development
- Lack of the serious user base needed for full functioning
- High competition
Technology
A blockchain is a growing list of records, called blocks, that are linked together using cryptography. Each block contains a cryptographic hash of the previous block, a timestamp, and transaction data (generally represented as a Merkle tree). The timestamp proves that the transaction data existed when the block was published, since it is incorporated into the block's hash. As blocks each contain information about the block previous to it, they form a chain, with each additional block reinforcing the ones before it. Therefore, blockchains are resistant to modification of their data because once recorded, the data in any given block cannot be altered retroactively without altering all subsequent blocks.
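The hash-linking described above can be reproduced in a few lines (a minimal sketch, not the block format of bitcoin or any other real chain):

```python
import hashlib, json, time

# Minimal hash-linked chain: each block commits to the previous block's hash,
# so changing an old block invalidates every block after it.
def block_hash(block):
    return hashlib.sha256(
        json.dumps({k: block[k] for k in ("prev", "time", "tx")},
                   sort_keys=True).encode()).hexdigest()

def make_block(prev_hash, transactions):
    block = {"prev": prev_hash, "time": int(time.time()), "tx": transactions}
    block["hash"] = block_hash(block)
    return block

def verify(chain):
    for i, block in enumerate(chain):
        if block["hash"] != block_hash(block):
            return False                      # block contents were altered
        if i > 0 and block["prev"] != chain[i - 1]["hash"]:
            return False                      # link to the previous block is broken
    return True

genesis = make_block("0" * 64, [])
chain = [genesis, make_block(genesis["hash"], ["alice->bob:5"])]
print(verify(chain))            # True
chain[0]["tx"] = ["forged tx"]  # retroactively edit an old block
print(verify(chain))            # False: the stored hash no longer matches
```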
Blockchains are typically managed by a peer-to-peer network for use as a publicly distributed ledger, where nodes collectively adhere to a protocol to communicate and validate new blocks. Although blockchain records are not unalterable as forks are possible, blockchains may be considered secure by design and exemplify a distributed computing system with high Byzantine fault tolerance.
The blockchain was popularized by a person (or group of people) using the name Satoshi Nakamoto in 2008 to serve as the public transaction ledger of the cryptocurrency bitcoin, based on work by Stuart Haber, W. Scott Stornetta, and Dave Bayer. The identity of Satoshi Nakamoto remains unknown to date. The implementation of the blockchain within bitcoin made it the first digital currency to solve the double-spending problem without the need of a trusted authority or central server. The bitcoin design has inspired other applications and blockchains that are readable by the public and are widely used by cryptocurrencies. The blockchain is considered a type of payment rail.
Private blockchains have been proposed for business use. Computerworld called the marketing of such privatized blockchains without a proper security model "snake oil"; however, others have argued that permissioned blockchains, if carefully designed, may be more decentralized and therefore more secure in practice than permissionless ones.
Cryptographer David Chaum first proposed a blockchain-like protocol in his 1982 dissertation "Computer Systems Established, Maintained, and Trusted by Mutually Suspicious Groups." Further work on a cryptographically secured chain of blocks was described in 1991 by Stuart Haber and W. Scott Stornetta. They wanted to implement a system wherein document timestamps could not be tampered with. In 1992, Haber, Stornetta, and Dave Bayer incorporated Merkle trees to the design, which improved its efficiency by allowing several document certificates to be collected into one block. Under their company Surety, their document certificate hashes have been published in The New York Times every week since 1995.
The first decentralized blockchain was conceptualized by a person (or group of people) known as Satoshi Nakamoto in 2008. Nakamoto improved the design in an important way using a Hashcash-like method to timestamp blocks without requiring them to be signed by a trusted party and introducing a difficulty parameter to stabilize the rate at which blocks are added to the chain. The design was implemented the following year by Nakamoto as a core component of the cryptocurrency bitcoin, where it serves as the public ledger for all transactions on the network.
In August 2014, the bitcoin blockchain file size, containing records of all transactions that have occurred on the network, reached 20 GB (gigabytes). In January 2015, the size had grown to almost 30 GB, and from January 2016 to January 2017, the bitcoin blockchain grew from 50 GB to 100 GB in size. The ledger size had exceeded 200 GB by early 2020.
The words block and chain were used separately in Satoshi Nakamoto's original paper, but were eventually popularized as a single word, blockchain, by 2016.
According to Accenture, an application of the diffusion of innovations theory suggests that blockchains attained a 13.5% adoption rate within financial services in 2016, therefore reaching the early adopters phase. Industry trade groups joined to create the Global Blockchain Forum in 2016, an initiative of the Chamber of Digital Commerce.
In May 2018, Gartner found that only 1% of CIOs indicated any kind of blockchain adoption within their organisations, and only 8% of CIOs were in the short-term "planning or [looking at] active experimentation with blockchain". For the year 2019 Gartner reported 5% of CIOs believed blockchain technology was a 'game-changer' for their business.
A blockchain is a decentralized, distributed, and oftentimes public, digital ledger consisting of records called blocks that is used to record transactions across many computers so that any involved block cannot be altered retroactively, without the alteration of all subsequent blocks. This allows the participants to verify and audit transactions independently and relatively inexpensively. A blockchain database is managed autonomously using a peer-to-peer network and a distributed timestamping server. They are authenticated by mass collaboration powered by collective self-interests. Such a design facilitates robust workflow where participants' uncertainty regarding data security is marginal. The use of a blockchain removes the characteristic of infinite reproducibility from a digital asset. It confirms that each unit of value was transferred only once, solving the long-standing problem of double spending. A blockchain has been described as a value-exchange protocol. A blockchain can maintain title rights because, when properly set up to detail the exchange agreement, it provides a record that compels offer and acceptance.
Logically, a blockchain can be seen as consisting of several layers:
infrastructure (hardware)
networking (node discovery, information propagation and verification)
consensus (proof of work, proof of stake)
data (blocks, transactions)
application (smart contracts/decentralized applications, if applicable)
Blocks hold batches of valid transactions that are hashed and encoded into a Merkle tree. Each block includes the cryptographic hash of the prior block in the blockchain, linking the two. The linked blocks form a chain. This iterative process confirms the integrity of the previous block, all the way back to the initial block, which is known as the genesis block. To assure the integrity of a block and the data contained in it, the block is usually digitally signed.
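A minimal sketch of hashing transactions into a Merkle tree whose root would go into a block header (illustrative; real implementations differ in details such as bitcoin's double SHA-256 and its odd-leaf handling):

```python
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(transactions):
    """Pairwise-hash transaction hashes upward until a single root remains."""
    level = [sha256(tx.encode()) for tx in transactions]
    while len(level) > 1:
        if len(level) % 2:            # duplicate the last hash if the level is odd
            level.append(level[-1])
        level = [sha256(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0].hex()

txs = ["alice->bob:5", "bob->carol:2", "carol->dave:1"]
print(merkle_root(txs))
# Changing any single transaction changes the root:
print(merkle_root(["alice->bob:6", "bob->carol:2", "carol->dave:1"]))
```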
Sometimes separate blocks can be produced concurrently, creating a temporary fork. In addition to a secure hash-based history, any blockchain has a specified algorithm for scoring different versions of the history so that one with a higher score can be selected over others. Blocks not selected for inclusion in the chain are called orphan blocks. Peers supporting the database have different versions of the history from time to time. They keep only the highest-scoring version of the database known to them. Whenever a peer receives a higher-scoring version (usually the old version with a single new block added) they extend or overwrite their own database and retransmit the improvement to their peers. There is never an absolute guarantee that any particular entry will remain in the best version of the history forever. Blockchains are typically built to add the score of new blocks onto old blocks and are given incentives to extend with new blocks rather than overwrite old blocks. Therefore, the probability of an entry becoming superseded decreases exponentially as more blocks are built on top of it, eventually becoming very low. For example, bitcoin uses a proof-of-work system, where the chain with the most cumulative proof-of-work is considered the valid one by the network. There are a number of methods that can be used to demonstrate a sufficient level of computation. Within a blockchain the computation is carried out redundantly rather than in the traditional segregated and parallel manner.
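The "highest-scoring version wins" rule can be sketched as follows (illustrative; bitcoin scores branches by cumulative proof-of-work, modeled here as a per-block work value):

```python
# Toy fork choice: each node keeps whichever known chain has the highest
# cumulative score (here, the sum of per-block work values).
def chain_score(chain):
    return sum(block["work"] for block in chain)

def best_chain(known_chains):
    return max(known_chains, key=chain_score)

main_fork  = [{"height": h, "work": 1} for h in range(100)]
minor_fork = main_fork[:98] + [{"height": 98, "work": 1}]   # shorter competing branch

print(best_chain([main_fork, minor_fork]) is main_fork)      # True
# Blocks that exist only on the losing branch become orphan (stale) blocks.
```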
Block time
The block time is the average time it takes for the network to generate one extra block in the blockchain. Some blockchains create a new block as frequently as every five seconds. By the time of block completion, the included data becomes verifiable. In cryptocurrency, this is practically when the transaction takes place, so a shorter block time means faster transactions. The block time for Ethereum is set to between 14 and 15 seconds, while for bitcoin it is on average 10 minutes.
Hard forks
A hard fork is a rule change such that the software validating according to the old rules will see the blocks produced according to the new rules as invalid. In case of a hard fork, all nodes meant to work in accordance with the new rules need to upgrade their software. If one group of nodes continues to use the old software while the other nodes use the new software, a permanent split can occur.
For example, Ethereum has hard-forked to "make whole" the investors in The DAO, which had been hacked by exploiting a vulnerability in its code. In this case, the fork resulted in a split creating Ethereum and Ethereum Classic chains. In 2014 the Nxt community was asked to consider a hard fork that would have led to a rollback of the blockchain records to mitigate the effects of a theft of 50 million NXT from a major cryptocurrency exchange. The hard fork proposal was rejected, and some of the funds were recovered after negotiations and ransom payment. Alternatively, to prevent a permanent split, a majority of nodes using the new software may return to the old rules, as was the case of bitcoin split on 12 March 2013.
A more recent hard-fork example is of Bitcoin in 2017, which resulted in a split creating Bitcoin Cash. The network split was mainly due to a disagreement in how to increase the transactions per second to accommodate for demand.
By storing data across its peer-to-peer network, the blockchain eliminates a number of risks that come with data being held centrally. The decentralized blockchain may use ad hoc message passing and distributed networking. One risk of a lack of a decentralization is a so-called "51% attack" where a central entity can gain control of more than half of a network and can manipulate that specific blockchain record at will, allowing double-spending.
Peer-to-peer blockchain networks lack centralized points of vulnerability that computer crackers can exploit; likewise, it has no central point of failure. Blockchain security methods include the use of public-key cryptography. A public key (a long, random-looking string of numbers) is an address on the blockchain. Value tokens sent across the network are recorded as belonging to that address. A private key is like a password that gives its owner access to their digital assets or the means to otherwise interact with the various capabilities that blockchains now support. Data stored on the blockchain is generally considered incorruptible.
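The relationship between keys and addresses can be illustrated roughly as follows (a sketch only; a real blockchain derives the public key from the private key with elliptic-curve mathematics and adds signatures, checksums and encodings such as Base58Check, all of which are omitted here):

```python
import hashlib, secrets

# Illustrative only: derive an "address" as a hash of a public key.
# The key pair below is faked with random bytes purely to show the hashing step;
# real systems use elliptic-curve key generation and signing.
private_key = secrets.token_bytes(32)                          # stays with the owner
public_key = hashlib.sha256(b"pub:" + private_key).digest()    # stand-in derivation
address = hashlib.sha256(hashlib.sha256(public_key).digest()).hexdigest()[:40]

print("address:", address)   # what other participants see on the ledger
```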
Every node in a decentralized system has a copy of the blockchain. Data quality is maintained by massive database replication and computational trust. No centralized "official" copy exists and no user is "trusted" more than any other. Transactions are broadcast to the network using software. Messages are delivered on a best-effort basis. Mining nodes validate transactions, add them to the block they are building, and then broadcast the completed block to other nodes. Blockchains use various time-stamping schemes, such as proof-of-work, to serialize changes. Alternative consensus methods include proof-of-stake. Growth of a decentralized blockchain is accompanied by the risk of centralization because the computer resources required to process larger amounts of data become more expensive.
Open blockchains are more user-friendly than some traditional ownership records, which, while open to the public, still require physical access to view. Because all early blockchains were permissionless, controversy has arisen over the blockchain definition. An issue in this ongoing debate is whether a private system with verifiers tasked and authorized (permissioned) by a central authority should be considered a blockchain. Proponents of permissioned or private chains argue that the term "blockchain" may be applied to any data structure that batches data into time-stamped blocks. These blockchains serve as a distributed version of multiversion concurrency control (MVCC) in databases. Just as MVCC prevents two transactions from concurrently modifying a single object in a database, blockchains prevent two transactions from spending the same single output in a blockchain. Opponents say that permissioned systems resemble traditional corporate databases, not supporting decentralized data verification, and that such systems are not hardened against operator tampering and revision. Nikolai Hampton of Computerworld said that "many in-house blockchain solutions will be nothing more than cumbersome databases," and "without a clear security model, proprietary blockchains should be eyed with suspicion."
Permissionlessness
An advantage to an open, permissionless, or public, blockchain network is that guarding against bad actors is not required and no access control is needed. This means that applications can be added to the network without the approval or trust of others, using the blockchain as a transport layer.
Bitcoin and other cryptocurrencies currently secure their blockchain by requiring new entries to include a proof of work. To prolong the blockchain, bitcoin uses Hashcash puzzles. While Hashcash was designed in 1997 by Adam Back, the original idea was first proposed by Cynthia Dwork, Moni Naor and Eli Ponyatovski in their 1992 paper "Pricing via Processing or Combatting Junk Mail".
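A Hashcash-style puzzle fits in a few lines (a sketch; bitcoin applies double SHA-256 to a binary block header and uses a finer-grained difficulty target than the hex-prefix check shown here):

```python
import hashlib
from itertools import count

# Hashcash-style proof of work: find a nonce such that the hash of
# (data + nonce) starts with a required number of zero hex digits.
def proof_of_work(data: str, difficulty: int):
    target = "0" * difficulty
    for nonce in count():
        digest = hashlib.sha256(f"{data}:{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce, digest

def check(data: str, nonce: int, difficulty: int) -> bool:
    digest = hashlib.sha256(f"{data}:{nonce}".encode()).hexdigest()
    return digest.startswith("0" * difficulty)

nonce, digest = proof_of_work("block header bytes", difficulty=4)
print(nonce, digest)                              # costly to find...
print(check("block header bytes", nonce, 4))      # ...trivial to verify
```

The asymmetry between finding and checking the nonce is what lets every node cheaply verify that work was actually expended.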
In 2016, venture capital investment for blockchain-related projects was weakening in the USA but increasing in China. Bitcoin and many other cryptocurrencies use open (public) blockchains. As of April 2018, bitcoin has the highest market capitalization.
Permissioned (private) blockchain
Permissioned blockchains use an access control layer to govern who has access to the network. In contrast to public blockchain networks, validators on private blockchain networks are vetted by the network owner. They do not rely on anonymous nodes to validate transactions nor do they benefit from the network effect. Permissioned blockchains can also go by the name of 'consortium' blockchains. It has been argued that permissioned blockchains can guarantee a certain level of decentralization, if carefully designed, as opposed to permissionless blockchains, which are often centralized in practice.
Disadvantages of private blockchain
Nikolai Hampton pointed out in Computerworld that "There is also no need for a '51 percent' attack on a private blockchain, as the private blockchain (most likely) already controls 100 percent of all block creation resources. If you could attack or damage the blockchain creation tools on a private corporate server, you could effectively control 100 percent of their network and alter transactions however you wished." This has a set of particularly profound adverse implications during a financial crisis or debt crisis like the financial crisis of 2007–08, where politically powerful actors may make decisions that favor some groups at the expense of others, and "the bitcoin blockchain is protected by the massive group mining effort. It's unlikely that any private blockchain will try to protect records using gigawatts of computing power — it's time consuming and expensive." He also said, "Within a private blockchain there is also no 'race'; there's no incentive to use more power or discover blocks faster than competitors. This means that many in-house blockchain solutions will be nothing more than cumbersome databases."
Blockchain analysis
The analysis of public blockchains has become increasingly important with the popularity of bitcoin, Ethereum, litecoin and other cryptocurrencies. A public blockchain lets anyone with the necessary know-how observe and analyse the chain data. Understanding and tracing the flow of crypto has been an issue for many cryptocurrencies, crypto-exchanges and banks, because blockchain-based cryptocurrencies have been accused of enabling illicit dark-market trade in drugs and weapons, money laundering, and so on. A common belief has been that cryptocurrency is private and untraceable, leading many actors to use it for illegal purposes. This is changing: specialised tech companies now provide blockchain tracking services, making crypto exchanges, law enforcement and banks more aware of what is happening with crypto funds and fiat-crypto exchanges. The development, some argue, has led criminals to prioritise newer cryptocurrencies such as Monero. The question is about the public accessibility of blockchain data versus the personal privacy of that same data; it is a key debate in cryptocurrency and ultimately in blockchain.
In April 2016, Standards Australia submitted a proposal to the International Organization for Standardization to consider developing standards to support blockchain technology. This proposal resulted in the creation of ISO Technical Committee 307, Blockchain and Distributed Ledger Technologies. The technical committee has working groups relating to blockchain terminology, reference architecture, security and privacy, identity, smart contracts, governance and interoperability for blockchain and DLT, as well as standards specific to industry sectors and generic government requirements. More than 50 countries are participating in the standardization process together with external liaisons such as the Society for Worldwide Interbank Financial Telecommunication (SWIFT), the European Commission, the International Federation of Surveyors, the International Telecommunication Union (ITU) and the United Nations Economic Commission for Europe (UNECE).
Many other national standards bodies and open standards bodies are also working on blockchain standards. These include the National Institute of Standards and Technology (NIST), the European Committee for Electrotechnical Standardization (CENELEC), the Institute of Electrical and Electronics Engineers (IEEE), the Organization for the Advancement of Structured Information Standards (OASIS), and some individual participants in the Internet Engineering Task Force (IETF).
Currently, there are at least four types of blockchain networks — public blockchains, private blockchains, consortium blockchains and hybrid blockchains.
A public blockchain has absolutely no access restrictions. Anyone with an Internet connection can send transactions to it as well as become a validator (i.e., participate in the execution of a consensus protocol). Usually, such networks offer economic incentives for those who secure them and utilize some type of a Proof of Stake or Proof of Work algorithm.
Some of the largest, most known public blockchains are the bitcoin blockchain and the Ethereum blockchain.
A private blockchain is permissioned. One cannot join it unless invited by the network administrators. Participant and validator access is restricted. To distinguish between open blockchains and other peer-to-peer decentralized database applications that are not open ad-hoc compute clusters, the terminology Distributed Ledger (DLT) is normally used for private blockchains.
A hybrid blockchain has a combination of centralized and decentralized features. The exact workings of the chain can vary depending on which portions of centralization and decentralization are used.
A sidechain is a designation for a blockchain ledger that runs in parallel to a primary blockchain. Entries from the primary blockchain (where said entries typically represent digital assets) can be linked to and from the sidechain; this allows the sidechain to otherwise operate independently of the primary blockchain (e.g., by using an alternate means of record keeping, alternate consensus algorithm, etc.).
Blockchain technology can be integrated into multiple areas. The primary use of blockchains is as a distributed ledger for cryptocurrencies such as bitcoin; there were also a few other operational products which had matured from proof of concept by late 2016. As of 2016, some businesses have been testing the technology and conducting low-level implementation to gauge blockchain's effects on organizational efficiency in their back office.
In 2019, it was estimated that around $2.9 billion was invested in blockchain technology, an 89% increase from the year prior. Additionally, the International Data Corporation has estimated that corporate investment in blockchain technology will reach $12.4 billion by 2022. Furthermore, according to PricewaterhouseCoopers (PwC), the second-largest professional services network in the world, blockchain technology has the potential to generate an annual business value of more than $3 trillion by 2030. PwC's estimate is further supported by a 2018 study it conducted, in which 600 business executives were surveyed and 84% were found to have at least some exposure to blockchain technology, indicating significant demand and interest in the technology.
Individual use of blockchain technology has also greatly increased since 2016. According to statistics in 2020, there were more than 40 million blockchain wallets in 2020 in comparison to around 10 million blockchain wallets in 2016.
Most cryptocurrencies use blockchain technology to record transactions. For example, the bitcoin network and Ethereum network are both based on blockchain. On 8 May 2018 Facebook confirmed that it would open a new blockchain group which would be headed by David Marcus, who previously was in charge of Messenger. Facebook's planned cryptocurrency platform, Libra (now known as Diem), was formally announced on June 18, 2019.
The criminal enterprise Silk Road, which operated on Tor, utilized cryptocurrency for payments, some of which the US federal government has seized through research on the blockchain and forfeiture.
Governments have mixed policies on the legality of their citizens or banks owning cryptocurrencies. China implements blockchain technology in several industries including a national digital currency which launched in 2020. In order to strengthen their respective currencies, Western governments including the European Union and the United States have initiated similar projects.
Blockchain-based smart contracts are proposed contracts that can be partially or fully executed or enforced without human interaction. One of the main objectives of a smart contract is automated escrow. A key feature of smart contracts is that they do not need a trusted third party (such as a trustee) to act as an intermediary between contracting entities - the blockchain network executes the contract on its own. This may reduce friction between entities when transferring value and could subsequently open the door to a higher level of transaction automation. An IMF staff discussion paper from 2018 reported that smart contracts based on blockchain technology might reduce moral hazards and optimize the use of contracts in general, but noted that "no viable smart contract systems have yet emerged." Due to the lack of widespread use, their legal status was unclear.
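A toy escrow conveys the idea (Python pseudocode of the contract logic only; production smart contracts are written in a blockchain-specific language such as Solidity and executed by every node of the network, and the names here are invented):

```python
# Toy escrow "contract": funds are released automatically once the agreed
# condition is met, with no trusted intermediary. Illustrative logic only.
class Escrow:
    def __init__(self, buyer, seller, amount):
        self.buyer, self.seller, self.amount = buyer, seller, amount
        self.deposited = False
        self.delivered = False
        self.settled = False

    def deposit(self, sender, value):
        if sender == self.buyer and value == self.amount:
            self.deposited = True

    def confirm_delivery(self, sender):
        if sender == self.buyer and self.deposited:
            self.delivered = True
            self.settle()

    def settle(self):
        if self.deposited and self.delivered and not self.settled:
            self.settled = True
            print(f"release {self.amount} to {self.seller}")

contract = Escrow(buyer="alice", seller="bob", amount=100)
contract.deposit("alice", 100)
contract.confirm_delivery("alice")   # triggers automatic release to bob
```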
According to Reason, many banks have expressed interest in implementing distributed ledgers for use in banking and are cooperating with companies creating private blockchains, and according to a September 2016 IBM study, this is occurring faster than expected.
Banks are interested in this technology not least because it has potential to speed up back office settlement systems.
Banks such as UBS are opening new research labs dedicated to blockchain technology in order to explore how blockchain can be used in financial services to increase efficiency and reduce costs.
Berenberg, a German bank, believes that blockchain is an "overhyped technology" that has had a large number of "proofs of concept", but still has major challenges, and very few success stories.
The blockchain has also given rise to initial coin offerings (ICOs) as well as a new category of digital asset called security token offerings (STOs), also sometimes referred to as digital security offerings (DSOs). STO/DSOs may be conducted privately or on a public, regulated stock exchange and are used to tokenize traditional assets such as company shares as well as more innovative ones like intellectual property, real estate, art, or individual products. A number of companies are active in this space providing services for compliant tokenization, private STOs, and public STOs.
Blockchain technology, such as cryptocurrencies and non-fungible tokens (NFTs), has been used in video games for monetization. Many live-service games offer in-game customization options, such as character skins or other in-game items, which the players can earn and trade with other players using in-game currency. Some games also allow for trading of virtual items using real-world currency, but this may be illegal in some countries where video games are seen as akin to gambling, and has led to gray market issues such as skin gambling, and thus publishers typically have shied away from allowing players to earn real-world funds from games. Blockchain games typically allow players to trade these in-game items for cryptocurrency, which can then be exchanged for money.
The first known game to use blockchain technologies was CryptoKitties, launched in November 2017, where the player would purchase NFTs with Ethereum cryptocurrency, each NFT consisting of a virtual pet that the player could breed with others to create offspring with combined traits as new NFTs. The game made headlines in December 2017 when one virtual pet sold for more than US$100,000. CryptoKitties also illustrated scalability problems for games on Ethereum when it created significant congestion on the Ethereum network in early 2018 with approximately 30% of all Ethereum transactions[clarification needed] being for the game.
By the early 2020s there had not been a breakout success in video games using blockchain, as these games tend to focus on using blockchain for speculation instead of more traditional forms of gameplay, which offers limited appeal to most players. Such games also represent a high risk to investors as their revenues can be difficult to predict. However, limited successes of some games, such as Axie Infinity during the COVID-19 pandemic, and corporate plans towards metaverse content, refueled interest in the area of GameFi, a term describing the intersection of video games and financing typically backed by blockchain currency, in the second half of 2021. Several major publishers, including Ubisoft, Electronic Arts, and Take Two Interactive, have stated that blockchain and NFT-based games are under serious consideration for their companies in the future.
In October 2021, Valve Corporation banned blockchain games, including those using cryptocurrency and NFTs, from being hosted on its Steam digital storefront service, which is widely used for personal computer gaming, claiming that this was an extension of its policy banning games that offered in-game items with real-world value. Valve's prior history with gambling, specifically skin gambling, was speculated to be a factor in the decision to ban blockchain games. Journalists and players responded positively to Valve's decision, as blockchain and NFT games have a reputation for scams and fraud among most PC gamers. Epic Games, which runs the Epic Games Store in competition with Steam, said that it would be open to accepting blockchain games in the wake of Valve's refusal.
There have been several different efforts to employ blockchains in supply chain management.
Precious commodities mining — Blockchain technology has been used for tracking the origins of gemstones and other precious commodities. In 2016, The Wall Street Journal reported that the blockchain technology company, Everledger was partnering with IBM's blockchain-based tracking service to trace the origin of diamonds to ensure that they were ethically mined. As of 2019, the Diamond Trading Company (DTC) has been involved in building a diamond trading supply chain product called Tracr.
Food supply — As of 2018, Walmart and IBM were running a trial to use a blockchain-backed system for supply chain monitoring for lettuce and spinach — all nodes of the blockchain were administered by Walmart and were located on the IBM cloud.
There are several different efforts to offer domain name services via blockchain. These domain names can be controlled by the use of a private key, which purport to allow for uncensorable websites. This would also bypass a registrar's ability to suppress domains used for fraud, abuse, or illegal content.
Namecoin is a cryptocurrency that supports the ".bit" top-level domain (TLD). Namecoin was forked from bitcoin in 2011. The .bit TLD is not sanctioned by ICANN, instead requiring an alternative DNS root. As of 2015, it was used by 28 websites, out of 120,000 registered names. Namecoin was dropped by OpenNIC in 2019, due to malware and potential other legal issues. Other blockchain alternatives to ICANN include The Handshake Network, EmerDNS, and Unstoppable Domains.
Specific TLDs include ".eth", ".luxe", and ".kred", which are associated with the Ethereum blockchain through the Ethereum Name Service (ENS). The .kred TLD also acts as an alternative to conventional cryptocurrency wallet addresses, as a convenience for transferring cryptocurrency.
Blockchain technology can be used to create a permanent, public, transparent ledger system for compiling data on sales, tracking digital use and payments to content creators, such as wireless users or musicians. The Gartner 2019 CIO Survey reported 2% of higher education respondents had launched blockchain projects and another 18% were planning academic projects in the next 24 months. In 2017, IBM partnered with ASCAP and PRS for Music to adopt blockchain technology in music distribution. Imogen Heap's Mycelia service has also been proposed as blockchain-based alternative "that gives artists more control over how their songs and associated data circulate among fans and other musicians."
New distribution methods are available for the insurance industry such as peer-to-peer insurance, parametric insurance and microinsurance following the adoption of blockchain. The sharing economy and IoT are also set to benefit from blockchains because they involve many collaborating peers. The use of blockchain in libraries is being studied with a grant from the U.S. Institute of Museum and Library Services.
Other blockchain designs include Hyperledger, a collaborative effort from the Linux Foundation to support blockchain-based distributed ledgers, with projects under this initiative including Hyperledger Burrow (by Monax) and Hyperledger Fabric (spearheaded by IBM). Another is Quorum, a permissionable private blockchain by JPMorgan Chase with private storage, used for contract applications.
Blockchain is also being used in peer-to-peer energy trading.
Blockchain could be used in detecting counterfeits by associating unique identifiers to products, documents and shipments, and storing records associated to transactions that cannot be forged or altered. It is however argued that blockchain technology needs to be supplemented with technologies that provide a strong binding between physical objects and blockchain systems. The EUIPO established an Anti-Counterfeiting Blockathon Forum, with the objective of "defining, piloting and implementing" an anti-counterfeiting infrastructure at the European level. The Dutch Standardisation organisation NEN uses blockchain together with QR Codes to authenticate certificates.
With the increasing number of blockchain systems appearing, even only those that support cryptocurrencies, blockchain interoperability is becoming a topic of major importance. The objective is to support transferring assets from one blockchain system to another blockchain system. Wegner stated that "interoperability is the ability of two or more software components to cooperate despite differences in language, interface, and execution platform". The objective of blockchain interoperability is therefore to support such cooperation among blockchain systems, despite those kinds of differences.
There are already several blockchain interoperability solutions available. They can be classified in three categories: cryptocurrency interoperability approaches, blockchain engines, and blockchain connectors.
Several individual IETF participants produced the draft of a blockchain interoperability architecture.
Blockchain mining — the peer-to-peer computer computations by which transactions are validated and verified — requires a significant amount of energy. In June 2018 the Bank for International Settlements criticized the use of public proof-of-work blockchains due to their high energy consumption. In 2021, a study conducted by Cambridge University determined that Bitcoin (at 121.36 terawatt-hours per year) uses more electricity annually than Argentina (at 121 TWh) and the Netherlands (at 108.8 TWh). According to Digiconomist, one bitcoin transaction requires about 707.6 kilowatt-hours of electrical energy, the amount of energy the average U.S. household consumes in 24 days.
In February 2021 U.S. Treasury Secretary Janet Yellen called Bitcoin "an extremely inefficient way to conduct transactions", saying "the amount of energy consumed in processing those transactions is staggering."[142] In March 2021 Bill Gates stated that "Bitcoin uses more electricity per transaction than any other method known to mankind", "It's not a great climate thing."
Nicholas Weaver, of the International Computer Science Institute at the University of California, Berkeley, examined blockchain's online security, and the energy efficiency of proof-of-work public blockchains, and in both cases found it grossly inadequate. The 31–45 TWh of electricity used for bitcoin in 2018 produced 17–22.9 MtCO2.
Inside the cryptocurrency industry, concern about high energy consumption has led some companies to consider moving from the proof of work blockchain model to the less energy-intensive proof of stake model.
In October 2014, the MIT Bitcoin Club, with funding from MIT alumni, provided undergraduate students at the Massachusetts Institute of Technology access to $100 of bitcoin. The adoption rates, as studied by Catalini and Tucker (2016), revealed that when people who typically adopt technologies early are given delayed access, they tend to reject the technology. Many universities have founded departments focusing on crypto and blockchain, including MIT, in 2017. In the same year, Edinburgh became "one of the first big European universities to launch a blockchain course", according to the Financial Times.
Motivations for adopting blockchain technology (an aspect of innovation adoption) have been investigated by researchers. Janssen et al. provided a framework for analysis. Koens & Poll pointed out that adoption could be heavily driven by non-technical factors. Based on behavioral models, Li discussed the differences between adoption at the individual and organizational levels.
Scholars in business and management have started studying the role of blockchains to support collaboration. It has been argued that blockchains can foster both cooperation (i.e., prevention of opportunistic behavior) and coordination (i.e., communication and information sharing). Thanks to reliability, transparency, traceability of records, and information immutability, blockchains facilitate collaboration in a way that differs both from the traditional use of contracts and from relational norms. Contrary to contracts, blockchains do not directly rely on the legal system to enforce agreements. In addition, contrary to the use of relational norms, blockchains do not require trust or direct connections between collaborators.
The need for internal audit to provide effective oversight of organizational efficiency will require a change in the way that information is accessed in new formats. Blockchain adoption requires a framework to identify the risk of exposure associated with transactions using blockchain. The Institute of Internal Auditors has identified the need for internal auditors to address this transformational technology. New methods are required to develop audit plans that identify threats and risks. The Internal Audit Foundation study, Blockchain and Internal Audit, assesses these factors. The American Institute of Certified Public Accountants has outlined new roles for auditors as a result of blockchain.
In September 2015, the first peer-reviewed academic journal dedicated to cryptocurrency and blockchain technology research, Ledger, was announced. The inaugural issue was published in December 2016. The journal covers aspects of mathematics, computer science, engineering, law, economics and philosophy that relate to cryptocurrencies such as bitcoin.
The journal encourages authors to digitally sign a file hash of submitted papers, which are then timestamped into the bitcoin blockchain. Authors are also asked to include a personal bitcoin address in the first page of their papers for non-repudiation purposes.
The pride of the Esil region is its well-known people, who through their work and social activity have contributed to the development of the region and to its history.
For outstanding service in combat during the Great Patriotic War, the title of Hero of the Soviet Union was awarded to Danil Potapovich Nesterenko and Ishkhan Barsegovich Sarybekyan.
The title of Hero of Socialist Labor was awarded to: Alexander Andreevich Beloboky, director of the Svobodny state farm; Nikolai Vasilievich Bondar, machine operator of the Krasivinsky state farm; Arkady Vikentievich Gaurlik, director of the Moskovsky state farm; Fedor Lazarevich Efimenko, combine operator of the Karakolsky state farm; Nikolai Ivanovich Kireev, driver; Vasily Evlampievich Kopylov, machine operator of the Zarechny state farm; Lyubov Andreevna Lipatova, agronomist of the Mirny state farm; Rysbek Myrzashev, First Secretary of the Esil District Party Committee; Anatoly Rodionovich Nikulin, First Secretary of the Esil District Party Committee; Daniil Ivanovich Nechitailo, director of the Dalniy state farm; Alexander Petrovich Poltyanoy, operator of the Kalachevsky state farm; Shaydakhmet Sergazin, chairman of the district executive committee; and Vasily Stepanovich Stepanov, director of the Krasivinsky state farm.