Wednesday, September 29, 2010

Supercomputing steps down from the altar



Ten trillion operations per second, one hundred trillion, one quadrillion...

Every time our high-performance computers set new records, these figures have been, in the public eye, merely a symbol of national scientific and technological strength. They stood high above ordinary business users, who had nothing to do with them; apart from special state departments, the only organizations that could use the words "high-performance computing platform" were large industrial projects such as major oil fields and large aircraft manufacturers.

This situation began to change with the Shanghai Supercomputer Center.

In December 2000, the Shanghai Municipal Government built the Shanghai Supercomputer Center. Because the center answered to the local government and its operators had a forward-looking sense of service, it was defined from the start as a "public-oriented computing platform," and more and more users came into close contact with high-performance computing.

The development of public computing platforms has become an important symbol of a city's modernization. They penetrate the industrial chains of various sectors, letting human resources, capital, and technology flow rapidly and bringing huge economic and social benefits. Over the past ten years, increasingly strong demand, together with new opportunities from emerging industries such as cloud computing, the Internet of Things, and triple-play convergence, has quickly ignited a passion for large-scale public computing platforms; "computing platforms" that originally served in the background have sprung to the foreground in unstoppable numbers.

Currently, Beijing, Shenzhen, Tianjin, Shenyang, Wuhan, Guangzhou, Jinan, Chengdu, Changsha and other cities are investing heavily in building supercomputer centers. From tens of trillions to hundreds of trillions and even quadrillions of operations per second, large supercomputing centers are blossoming everywhere across the country, and some cities have more than one.

High-performance computing applications coming down from the altar is a sign of progress in the IT industry, and developing public computing platforms is an inevitable choice for sustainable regional economic development. But facing this surging wave of supercomputing, we must ask:

Do we really need so many supercomputing centers?

Can these expensive computing platforms really prove their worth?

Who is operating these super machines, and who is using the supercomputing centers?

To answer these questions, our reporters went to Shanghai, Chengdu, and Lanzhou, trying to uncover the mystery of the supercomputing centers.

Last June, the Ministry of Science and Technology officially approved the establishment of a National Supercomputing Center in Shenzhen: with a state investment of 200 million yuan, a petaflop-scale supercomputer is to be completed in Shenzhen by the end of 2010. Almost at the same time, the Tianjin Binhai New Area and the National University of Defense Technology signed a cooperation agreement; the Ministry of Science and Technology, the Binhai New Area, and NUDT jointly invested 600 million yuan to build a national supercomputing center in the Binhai New Area and develop a petaflop supercomputer.

Recently, there have been indications that Beijing will build an even larger petaflop-scale supercomputing center; the related approvals are in preparation. In addition, Guangzhou, Shenyang, Chengdu, Changsha, Wuhan and other cities are almost all building or expanding supercomputing centers, with targets at the hundred-trillion or even quadrillion operations per second level.

As if overnight, national supercomputing centers are blossoming everywhere. Do we really have such large computing needs? With this question, Computer World reporters visited representative supercomputing centers in Shanghai, Chengdu, Gansu and elsewhere. Some are already operating successfully; some are still under construction. All of them face the same problem: how to step down from the altar and into ordinary use.

Finding users, even subsidizing them

In the 1990s, people already had some knowledge of high-performance computing. The Shanghai Meteorological Bureau found that its existing computing power could no longer meet its everyday needs and prepared to purchase a new "big machine." When the plan was submitted to the government procurement department, the Shanghai Municipal Government considered that buying such a machine would be too costly: the Meteorological Bureau would not use it frequently, and the idle time would waste a great deal of resources. If this high-end computer could instead be offered as a public facility to more users, it would be far more effective. The Shanghai Municipal Government therefore proposed building a public service platform around the "big machine," thus creating the concept of the supercomputing center. "We began construction of the Shanghai Supercomputer Center in the Zhangjiang Hi-tech Zone in Pudong in 1999, and in early 2001 it officially opened to the community," said Xi Zili, director of the Shanghai Supercomputer Center.

Xi Zili was among the first group of people to serve in the supercomputing center industry, where he has worked for over ten years. Recalling how hard it was to win users in the center's early days, Xi Zili was frank: "The government had put a large amount of money into this supercomputing center. If utilization did not come up, that meant failure, and it meant a great waste of capital, because the hardware investment had already been made." The hardest part of the early period was having to find customers themselves. "At first, I visited 3 to 5 potential users every week to understand their backgrounds, operations, and needs, and to attract them to the center," Xi Zili said. Sometimes the center even subsidized users to use the machine.

At the beginning, the Shanghai Supercomputer Center selected a commercial aircraft company as its industrial user, but at that time the company happened to be in financial difficulty. In the mid-1990s the company had bought IBM 4381 mainframes, but because its funds were limited, the project was never finished and the machines sat idle. The company wanted to move its work into the center but could neither pay the machine-time fees nor run many programs. On learning this, Xi Zili told them the center was willing to pay 200,000 yuan in subsidies to bring part of their work into the center. That closed the deal.

Besides the shortage of users, the capability of the machines themselves also constrained the Shanghai Supercomputer Center's early development. The first machine serving the center was a Shenwei-series supercomputer, whose compatibility limited the range of potential applications and users. Not until 2004, when the center brought in the Dawning 4000A with its open architecture, did this change: the Dawning series' architecture, software, and operating system were open and standardized, which meant the system was better compatible with common international software. With the compatibility problem solved, the center's user base grew as never before from 2004 onward. Last year, the center introduced the Dawning 5000A supercomputer, with a computing scale of 230 teraflops. Today, the center's users span a wide range of sectors.



Large-scale hardware computing platforms require matching software

At present, the Shanghai Supercomputer Center is the most successfully operated public computing platform. Unfortunately, of the dozens of supercomputing centers the state has invested in, few are still running well. Apart from a handful of survivors such as the Shanghai Supercomputer Center, some are on the verge of collapse.

Who is the "super user"

"Only together can really play a role, reflecting the value of public computing platform, which is also government investment to establish ultra-ICC mind." Xi Zili Moreover, the "Computer World," told reporters.

Today, many supercomputing centers under construction have understood this truth. Although the petaflop-scale supercomputing center in Tianjin has not yet officially come into use, its leadership is already out courting users.

The Tianjin Supercomputer Center is targeting fields with strong demand for supercomputing: weather, oil, medicine, architecture and others. Liu Guangming, director of the National Supercomputing Center in Tianjin, has therefore visited the Tianjin Meteorological Bureau, the building-software arm of the China Academy of Building Research, the Tianjin International Joint Academy of Biomedicine, CNOOC, and the Geophysical Research Institute of Shengli Oilfield. He found these "big users of supercomputing" very different from in the past: they are no longer short of money or people, and some have even built their own computing centers. Will they still generate strong demand for a public supercomputing platform?

"Demand is still very large." As Xi Zili said, the small-scale processing operation can be completed in their own computing centers, large-scale computing and use to large commercial software project, it is necessary to large-scale public computing platform to run , because only Supercomputing Center have enormous computing power and software capabilities.

Our reporters found in their survey that more than 80% of supercomputing center users are research institutes and universities, while the remaining 20% or so use the centers for industrial production.

BGP Nanjing is a typical industrial user of the Shanghai Supercomputer Center. In fact, BGP Nanjing has its own computing center, but it often has larger-scale computing needs. One year, BGP Nanjing took part in an international bid and had to deliver its results within one week; its own computing center was not large enough for a job of that size, so it came to the Shanghai Supercomputer Center. Xi Zili personally lobbied other users to save up and release 1,000 CPUs for BGP Nanjing's project, so that it got its results in time to bid. "Without such a large public computing platform, enterprises would miss many large international projects like this," Xi Zili said.

So the business demand for supercomputing is indeed very large. The Gansu Supercomputer Center, though much smaller in scale than Shanghai's, also clearly shows the value it delivers to its customers.

According to Hu Tiejun, director of the Gansu Supercomputer Center, the center adopted a strategy of building and operating at the same time. "Although our center is not large, it focuses on forward-looking technology. In the 2004 expansion we planned to make breakthroughs in high-performance computing, to build an auxiliary application platform for the next-generation IPv6 Internet and a data exchange center, and to complete the construction in yearly phases."

Today, the Gansu Supercomputer Center has a 41-teraflop cluster, 21 commercial software packages, and 13 shared open-source packages. Its users range from universities, including Lanzhou University and Lanzhou Jiaotong University, to government departments and research institutes such as the Gansu Provincial Meteorological Bureau and the Cold and Arid Regions Environmental and Engineering Research Institute of the Chinese Academy of Sciences, as well as enterprises. Among these users, the most targeted project, and the one that best embodies the center's key direction at this stage, is its cooperation with Lanzhou University on large-scale virtual screening for drug research.

Drug development is time-consuming, investment-heavy research. The traditional development process is costly, lengthy, and has a high elimination rate: a new drug takes on average over one billion US dollars and about 10 years to develop, and about 90% of drug candidates are eliminated in the clinical stage. In the virtual screening stage, millions or even billions of molecules must be simulated; traditional experimental methods involve not only a huge workload but also a long time. "This is where the supercomputer shows its unparalleled advantage," Hu Tiejun said. Supercomputer simulation takes very little time: in just a few weeks it can eliminate large numbers of compounds that do not meet requirements. The screening scope is much larger than traditional experiments and the results are more accurate, which greatly improves the efficiency of drug development and saves considerable R&D funding.

Xi Zili of the Shanghai Supercomputer Center and Hu Tiejun of the Gansu center both say their current utilization has reached 70% to 80%, which in a sense is basically full capacity. Still, some in the industry remain doubtful: with so many cities building petaflop or even ten-petaflop supercomputing centers, can they really be put to good use?

Software applications are the weak link

At present, applications at supercomputing centers lag far behind the scale of the hardware. Many in the industry therefore argue that "if applications cannot keep up, there is no point building ever bigger machines."

"If there is no advance of hardware resources, making sure that no more applications." Xi Zili did not agree to this view. He believes that it is difficult to define all the urban construction Supercomputing Center's original intention and ability to not rule out that some local governments to follow suit, the image projects out of psychological, "but the public do more advanced computing platforms. In the supercomputing field, needed to drive the hardware development of the industry. "

Xi Zili believes that only when hardware develops first can applications keep pace: only if a supercomputing platform reaches 100 teraflops is it possible to run 100-teraflop software. "So building supercomputers ahead of demand is certainly necessary, but not too far ahead, as that causes waste."

Dou Wenhua, a professor at the National University of Defense Technology, which is building the petaflop supercomputer "Milky Way" (Tianhe), also believes that human demand for high-performance computers is never satisfied: every step forward in demand drives innovation and breakthroughs, from basic theory to applied technology and materials technology.

First, public computing platforms have far-reaching significance for the development of high-tech industry. The high ground of high-tech industry is product development capability, and in the design of new materials, biotechnology, new medicine, environmental protection, and comprehensive utilization of resources, high-performance computing can play a significant role. Second, public computing platforms, as one of the innovation vehicles of the modern service industry, will certainly fuel enterprises' independent innovation. "The integration of the Internet, the telecommunications network, and the broadcast network is an inevitable trend, and the integration and development of networks will effectively promote the formation and development of new computing models and new service models," Dou Wenhua said.

However, among the many small and medium regional public computing platforms built or under construction, and the petaflop supercomputing centers planned by various local governments, apart from the slightly stronger Shanghai Supercomputer Center, most provinces and municipalities have not systematically developed their platforms' infrastructure, support services, or operating mechanisms. In short, the application side is still quite weak.

"Sometimes, the domestic Super Computing Center is very tragic." Xi Zili told reporters. Although the Shanghai Super Computer Center of the computing scale is 200 trillion times, but usually reach two trillion times a day, 10 trillion times the size of very good use, and most of the time also, but 50 trillion times.

Why is this so? "Because our domestic software level in the supercomputing field is poor, applications cannot scale up with the system," Xi Zili said.

In the United States, for example, the level of advanced materials analysis is ten or even a hundred times higher than China's, and its supercomputing centers run jobs at very large scales, routinely with tens of thousands of CPUs at once, while in China only a few hundred or at most 1,000 CPUs run simultaneously. "It is not that our available hardware is hundreds of times worse; it is that our high-performance computing software is very backward, and our parallel computing capability is very poor," said Wang Jianbo, director of the Chengdu Cloud Computing Center. However huge the hardware platform, without strong software support, "these machines are no different from a pile of junk," Wang Jianbo said.

Under these circumstances, Chinese supercomputing centers can only buy commercial software from developed countries. Some proprietary software is extremely expensive, costing even more than building the hardware platform, and the average supercomputing center simply cannot afford it. And some high-tech products, because of foreign export restrictions, cannot be bought at any price.

According to people familiar with the matter, although the country has invested a lot of money in software over the past 10 years, the effect has not been significant. Apart from weather, oil, and a few other pillar industries with some software R&D capability, there is still very little large-scale commercial software.

The reason is that a supercomputing center sits at an interdisciplinary intersection involving a wide range of industries, so it has more than one "mother": software and hardware resources are not managed by the same ministries. As a result, software developers do not understand the hardware architecture, and hardware R&D people do not know the characteristics of large-scale software. "This is the most fatal problem," Xi Zili said.

However, in the "nuclear high base" and after the introduction of major national science and technology projects to the core of electronic devices, high-end general chips and basic software become bigger and stronger, the state appointed the appropriate body to coordinate the work of ministries. "This is the development of China's public computing platform is a good thing, we hope the integration of industry and more in-depth information." Jian-Bo Wang said.

Commentary

Supercomputing centers need government guidance and help

A supercomputing center is a comprehensive, interdisciplinary platform whose development can lead the server, software, chip, machinery manufacturing, and other related industries forward together. Meanwhile, supercomputing centers also show progress across all walks of life through their results: the first use of a supercomputer to find oil, the first use for weather forecasting, the first use to analyze genes... Supercomputing affects every industry and has become a core element of a nation's competitiveness in science and technology.

Although we are pleased to see large supercomputing centers brewing everywhere, the government should guide the whole process and do some planning on scale and geography. A supercomputing center is not like small and medium construction that can blossom everywhere; it is a huge project that costs money, time, and labor. Reportedly, the Shanghai Supercomputer Center's electricity bill reaches 12 million yuan a year. The government should therefore assist supercomputing center operations in electricity, manpower, policy, and other areas.

Besides the backwardness of high-end applications, our public computing platforms still have many other problems.

First, geographical distribution is uneven. This uneven distribution of resources causes a dilemma: users with demand find it hard to obtain resources, while valuable resources elsewhere sit idle and wasted.

Second, construction lacks unified planning and positioning. Supercomputing centers under different departments invest repeatedly in economically developed regions, while many public computing platforms have ambiguous service positioning and lack strengths in specific subject areas.

Third, they do not take up the function of serving cross-disciplinary research.

Fourth, the high-performance computing industrial chain is not well developed. Public computing platforms serve end users directly and have a concrete understanding of user needs, application characteristics, and technology trends. They are also the main customers of high-performance computing hardware and software vendors. As a key link in the industry, public computing platforms must keep the whole ecosystem developing jointly.

However, these problems cannot be resolved by the supercomputing centers themselves; the power lies in the hands of their masters. For example, the state can plan and control supercomputing center construction sites so that future centers form a network with wider coverage, radiating to users across the whole country. In addition, although supercomputing centers have an inherent advantage in hands-on operation, they have no "education" qualification; policies should be introduced to let supercomputing centers train large numbers of industry professionals.

In short, public computing platforms reflect our country's computing power and are equivalent to the core of the high-performance computing industrial chain; relevant agencies should attach great importance to their progress. (Text / Liu Lili)

Link

The worldwide trend of high-performance computing going civilian

Currently, high-performance computers face challenges in scalability, reliability, power consumption, balance, programmability, and management complexity, and the industry is promoting technologies such as multicore and virtualization in response. A worldwide movement to bring high-performance computing to ordinary users has begun, ushering in what we call the "pervasive high-performance computing era."

Globally, developed countries have already deployed a large number of public computing platforms, with the United States having the most. The "Top 500" list of high-performance computers announced in November 2008 showed that 58.2% of the machines were installed in the United States, which controlled 66% of the total computing power; the United Kingdom followed with 9% of the machines and 5.4% of the total computing power.

In the United States, government agencies are the major supporters of public computing platforms. The best known include the San Diego Supercomputer Center (SDSC), the National Center for Supercomputing Applications (NCSA), the Pittsburgh Supercomputing Center (PSC), Lawrence Livermore National Laboratory (LLNL), Argonne National Laboratory (ANL), and Oak Ridge National Laboratory (ORNL).

In the EU, every Framework Programme for research and technological development has invested heavily in high-performance computing. Britain is Europe's largest supercomputing user, mainly through the Edinburgh Parallel Computing Centre (EPCC) and the University of Manchester's academic computing service (CSAR). Germany's installed number of supercomputers is basically level with the United Kingdom's; its three national supercomputing centers are the High Performance Computing Center Stuttgart (HLRS), the John von Neumann Institute for Computing (NIC), and the Leibniz Supercomputing Centre (LRZ) in Munich. France follows Germany and the UK, with its largest supercomputers run by the French Atomic Energy Commission. Other European countries have fewer supercomputing centers. Overall, European supercomputing centers have many distinctive features in facilities, operating models, customer support, and applications.

In Japan, the larger supercomputing centers include the Earth Simulator Center, the Institute of Physical and Chemical Research (RIKEN), the National Institute of Advanced Industrial Science and Technology, and the center established by the Japan Aerospace Exploration Agency (JAXA).

Currently, developed countries' investment in high-performance computing research and industry has been huge, sustained, and long in time span. This has given them a good foundation in high-performance computing research and technology, a wealth of accumulated experience, and a pool of professionals. At the same time, high-performance computing's contribution to national economic construction keeps rising, and the development of public computing has entered a virtuous circle.










Thursday, September 16, 2010

An introduction to DDoS and DDoS tracking





Link testing

Most tracking techniques start from the router closest to the victim and then examine the upstream data links until the origin of the attack traffic is found. Ideally, this process is applied recursively until the attack source is reached. The technique assumes the attack remains active until tracking completes; it is difficult to track after an attack has ended, or to track intermittent attacks or attacks that adjust themselves while being traced. Link testing includes the following two techniques:

1. Input debugging

Many routers offer an input debugging feature that allows administrators to filter certain packets at an egress port and determine which ingress port they arrived on. This feature can be used for traceback: first, the victim, on recognizing it is under attack, describes the common signature of the attack packets. The administrator then configures suitable input debugging on the upstream egress. The filter reveals the relevant input port, and the process can be repeated one hop further upstream until the originating source is reached. Of course, much of this work is done by hand; some foreign ISPs have jointly developed tools that can automatically trace through their own networks.

The biggest problem with this approach is the management overhead. Contacting multiple ISPs and coordinating with them takes time. The approach therefore requires a great deal of time and is almost impossible in practice.

2. Controlled flooding

Burch and Cheswick proposed this method. It actually manufactures a flood attack and determines the attack path by observing changes in router state. First, one needs a map of the upstream topology. When under attack, the victim can start from its upstream routers and, following the map, flood each of them in a controlled way in turn. Because the flood packets share router buffers with the attacker's packets, the flooding increases the probability that attack packets are dropped. By continuing upward along the map in this way, one can close in on the source of the attack.

The idea is very creative and also practical, but there are several drawbacks and limitations. The biggest drawback is that the approach is itself a DoS attack: it will also DoS some trusted paths, a shortcoming that is hard to program around. Moreover, controlled flooding requires a map covering almost the entire network topology. Burch and Cheswick also pointed out that the approach is hard to apply to tracing distributed (DDoS) attacks. And it is only effective while the attack is ongoing.

Cisco's CEF (Cisco Express Forwarding) can actually support a kind of link testing: to use CEF to trace back to the ultimate source, the routers on the path must be Cisco routers with CEF support, specifically Cisco 12000 or 7500 series routers (I have not checked the latest Cisco documentation to see whether this has changed). Using this feature is also very resource-intensive.

On Cisco routers that support ip source-track, IP source tracking is achieved with the following steps:

1. When a destination is found to be under attack, enable tracking for the destination address on the router with the command ip source-track.

2. Each line card creates a specific CEF queue for tracking the destination address. On line cards or port adapters that use a dedicated ASIC for packet switching, the CEF queue is used to punt packets to the line card's or port adapter's CPU.

3. Each line card CPU collects information about traffic to the tracked destination.

4. The collected data is periodically exported to the router. To display a summary of the flow information, enter: show ip source-track summary. To display more detailed information for each input interface, enter: show ip source-track.

5. The statistics provide a breakdown of the tracked traffic, which can be used to determine the upstream router. You can then turn off IP source tracking on the current router with the command no ip source-track, and re-enable the feature on the upstream router.

6. Repeat steps 1 through 5 until you find the attack source.

This pretty much answers the question securitytest raised.
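For illustration only, a session using this feature might look roughly like the following. The victim address is made up, and the exact command availability and output vary by platform and IOS version:

    Router# configure terminal
    Router(config)# ip source-track 192.0.2.10        ! start tracking the victim
    Router(config)# end
    Router# show ip source-track summary              ! totals per input interface
    Router# show ip source-track                      ! detailed per-interface view
    Router# configure terminal
    Router(config)# no ip source-track 192.0.2.10     ! done here, move upstream
    Router(config)# end

The summary output points at the interfaces carrying the attack traffic, which tells you which upstream router to repeat the process on.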

Logging

In this method, the main routers record (log) the packets they forward, and data-mining techniques are then used to determine the path the packets traversed. While this approach can track packets even after the attack has ended, it has obvious shortcomings, such as potentially requiring a large amount of resources (or sampling), and the problem of combining and correlating large volumes of data across providers.
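One laboratory answer to the resource problem, used by hash-based traceback work such as SPIE, is to log only small digests of packets in a Bloom filter rather than the packets themselves; the victim can later ask each router "did you see this packet?". A minimal Python sketch, where the hash choice, filter sizes, and names are my own assumptions:

    import hashlib

    class PacketDigestLog:
        """Bloom-filter log of packet digests (hash-based traceback sketch)."""

        def __init__(self, size_bits=1 << 20, num_hashes=3):
            self.size = size_bits                 # number of bits in the filter
            self.k = num_hashes                   # independent hash functions
            self.bits = bytearray(size_bits // 8)

        def _indexes(self, invariant_fields):
            # Derive k bit positions from the packet's invariant fields
            # (e.g. addresses plus leading payload bytes).
            for i in range(self.k):
                h = hashlib.sha256(bytes([i]) + invariant_fields).digest()
                yield int.from_bytes(h[:8], "big") % self.size

        def record(self, invariant_fields):
            # Called for every forwarded packet: set k bits, store nothing else.
            for idx in self._indexes(invariant_fields):
                self.bits[idx // 8] |= 1 << (idx % 8)

        def maybe_seen(self, invariant_fields):
            # Answer a victim's query. False positives are possible;
            # false negatives (within the logging window) are not.
            return all(self.bits[idx // 8] & (1 << (idx % 8))
                       for idx in self._indexes(invariant_fields))

A victim holding one attack packet can then query each upstream router's log in turn, much like input debugging but after the fact.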

ICMP tracking

This approach relies on routers generating ICMP traceback messages themselves. With very low probability (for example, 1/200000), a router copies the contents of a forwarded packet into an ICMP message that also contains information about the adjacent routers along the path. When a flood attack begins, the victim can use these ICMP messages to reconstruct the attacker's path. Compared with the approaches described above, this one has many advantages, but also some disadvantages. For example, ICMP traffic is often filtered relative to ordinary traffic; also, the ICMP traceback message depends on the same capability as input debugging (the ability to associate a packet with its input port and/or source MAC address), which some routers lack. The approach must also cope with attackers sending forged ICMP traceback messages. In other words, it is best used in conjunction with other tracking mechanisms to be more effective. (IETF iTrace)
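The router-side sampling itself is trivial; here is a toy Python sketch of the decision, where the message fields, probability constant, and callback are illustrative assumptions rather than the actual iTrace message format:

    import random

    ITRACE_PROB = 1 / 20000  # per-packet sampling probability (iTrace scale)

    def maybe_emit_itrace(packet, in_link, out_link, router_id, send_icmp):
        """Call once per forwarded packet; rarely emits a traceback message.

        packet is assumed to be a dict with 'dst' and 'data'; send_icmp is
        whatever primitive delivers an ICMP message. Both are illustrative.
        """
        if random.random() >= ITRACE_PROB:
            return
        send_icmp(packet["dst"], {          # victim collects these in a flood
            "router": router_id,
            "incoming_link": in_link,       # needs input-port association
            "outgoing_link": out_link,
            "sampled_bytes": bytes(packet["data"][:64]),
        })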

This is what the IETF iTrace working group studied. At the time I sent some comments to Bellovin, but never got an answer. For example:

1. Although traceback packets are sent at a random rate of only 1/20000, in the case of forged traceback packets, router efficiency will still be affected to some degree.

2. Traceback packets cannot solve the forgery problem without authentication. To determine whether a packet is forged, you must verify it, which increases the workload.

3. Even with NULL authentication, a forged message can still serve the attacker's purpose (appearing to be an authenticated one) without being much affected.

4. iTrace was originally intended to deal with the source-spoofing problem of DoS, but the current design seems to make us more concerned with the path rather than the source. Is the path really more useful than the source for solving the DoS problem?

So there is a pile of issues that I think iTrace will find difficult to face.

Packet Marking

This technology is conceptual (it has not been put into practice); it makes changes, though very small ones, on the basis of existing protocols. It is unlike the iTrace idea, and I think it is better than iTrace. There are many detailed studies of this kind of tracking, forming a variety of marking algorithms, but the best is the compressed edge fragment sampling algorithm.

The principle of this technique is to change the IP header, overloading the identification field: if the identification field is not being used for fragmentation, it is redefined as the marking field.

The 16 bits of identification are divided into: a 3-bit offset (allowing 8 fragments), a 5-bit distance, and an 8-bit edge fragment. The 5-bit distance allows paths of 31 hops, which is already enough for the current network.

The marking and path reconstruction algorithms are:

    Marking procedure at router R:
        let R' = BitInterleave(R, Hash(R))
        let k be the number of non-overlapping fragments in R'
        for each packet w:
            let x be a random number from [0..1)
            if x < p then
                let o be a random integer from [0..k-1]
                let f be the fragment of R' at offset o
                write f into w.frag
                write 0 into w.distance
                write o into w.offset
            else
                if w.distance = 0 then
                    let f be the fragment of R' at offset w.offset
                    write f XOR w.frag into w.frag
                increment w.distance

    Path reconstruction procedure at victim v:
        let FragTbl be a table of tuples (frag, offset, distance)
        let G be a tree with root v
        let edges in G be tuples (start, end, distance)
        let maxd := 0, last := v
        for each packet w from attacker:
            FragTbl.Insert(w.frag, w.offset, w.distance)
            if w.distance > maxd then maxd := w.distance
        for d := 0 to maxd:
            for all ordered combinations of fragments at distance d:
                construct edge z
                if d != 0 then z := z XOR last
                if Hash(EvenBits(z)) = OddBits(z) then
                    insert edge (z, EvenBits(z), d) into G
                    last := EvenBits(z)
        remove any edge (x, y, d) with d != distance from x to v in G
        extract path (R1 .. Rj) by enumerating acyclic paths in G
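To make the scheme concrete, below is a toy Python simulation of edge sampling in its uncompressed form (full router identifiers in the mark, no hashing or fragmentation). It keeps the probabilistic marking and distance logic of the algorithm above; all names are illustrative:

    import random

    MARK_PROB = 0.04  # marking probability p; Savage et al. suggest about 1/25

    class Mark:
        """The mark carried in each packet: edge (start, end) plus distance."""
        def __init__(self):
            self.start = None
            self.end = None
            self.distance = 0

    def mark_packet(mark, router):
        """Marking procedure at one router (uncompressed edge sampling)."""
        if random.random() < MARK_PROB:
            mark.start = router      # start a new edge sample
            mark.end = None
            mark.distance = 0
        else:
            if mark.distance == 0:
                mark.end = router    # complete the edge started one hop back
            mark.distance += 1

    def send_packets(path, n_packets):
        """Push n_packets along path (farthest router first); keep the marks."""
        marks = []
        for _ in range(n_packets):
            m = Mark()
            for router in path:
                mark_packet(m, router)
            if m.start is not None:
                marks.append((m.start, m.end, m.distance))
        return marks

    def reconstruct(marks, victim):
        """Victim-side reconstruction: chain edges by increasing distance."""
        edges = {}
        for start, end, dist in marks:
            edges[dist] = (start, end)
        path, d = [victim], 0
        while d in edges:
            path.append(edges[d][0])
            d += 1
        return path

    marks = send_packets(["R1", "R2", "R3"], 5000)
    print(reconstruct(marks, "victim"))   # ['victim', 'R3', 'R2', 'R1']

The compressed algorithm exists precisely because a full router address does not fit in 16 bits; hence the interleaved hash and the 8 fragments per address.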


Under laboratory conditions, a victim using this marking scheme can reconstruct the entire path after capturing only 1,000 to 2,500 packets. That result should be called good, but the scheme has not been put into practice, mainly because it needs support from router manufacturers and ISPs.

So much for IP traceback: the practical techniques and the laboratory (or stillborn) techniques above are the main ones, though there are others.

I have not worked on DDoS defense for a long time. Domestic products of this kind include "Black Hole," and I previously knew some foreign ones, such as FloodGuard, TopLayer, and Radware. Prompted by securitytest, I also learned about Riverhead and immediately read their white paper.

Since bigfoot mainly raised the subject of IP traceback earlier, securitytest turned to defense. For the DDoS problem, IP traceback and mitigation are not the same thing. IP traceback is mainly about tracking: because DDoS spoofs addresses, the real attack source is hard to determine, and if the real source could easily be found, that would help not only against DDoS but against other attacks too, for example on legal issues. Mitigation takes the victim's angle: the victim is generally unable to investigate the whole network to identify the source, and even if the source can be found, stopping it requires legal means or communication with the source (the attacking machines, not the attacker himself). That means a great deal of communication across ISPs and other non-technical obstacles, which is often hard to handle. But from the victim's point of view there has to be a solution, so we need mitigation.

This happens to be within the scope of my previous research, so I will say a lot. For mitigation, the fundamental technique is to separate the attack packets from the legitimate packets in a large traffic flow, discard the attack packets, and let the legitimate packets through. This cannot be done perfectly, so the practical goal is to identify as many attack packets as possible while affecting as few normal packets as possible. That, in turn, requires analyzing the methods and principles of DDoS (or DoS). It basically takes the following forms:

1. DoS caused by system vulnerabilities. The signatures are fixed, and detection and prevention are easy.

2. Protocol attacks (some related to the system, some to the protocol itself), such as SYN flood and fragment attacks. The signatures are fairly clear, and detection and prevention are relatively easy: SYN cookies, SYN cache, discarding fragments, and so on. Examples include land, smurf, and teardrop.

3. Bandwidth floods. Junk traffic clogs the bandwidth; the signatures are poor and defense is not easy.

4. Basically legitimate floods. Even more difficult than type 3, for example distributed floods resembling the Slashdot effect.

A real DDoS usually combines several forms. A SYN flood, for example, may also be a bandwidth flood.

The main factor affecting defense is whether usable signatures exist. Types 1 and 2 are relatively easy to solve, and some floods that basically do not affect service, such as ICMP floods, can simply be discarded. But if the attacker's packet-generating tool disguises attack packets well as legitimate ones, they are hard to identify.

The general mitigation methods are:

1. Filtering. For obvious signatures, such as some worms, routers can handle it. Of course, filtering is the ultimate solution: once attack packets are identified, they are filtered out.

2. Random packet dropping. Combined with a good random algorithm, legitimate packets can be less affected.

3. Specific defenses such as SYN cookies and SYN cache. These filter certain regular attack methods, for example ICMP floods and UDP floods. SYN cookies exist to avoid spoofing: requiring the full TCP three-way handshake makes spoofed sources easy to judge (a sketch follows this list).

4. Passive neglect. This can be called defeating deception with deception: a normal client whose connection fails will retry, but attackers generally do not. So the first connection request can be temporarily dropped and the second or third accepted.

5. Actively sending RSTs. Used against SYN floods, for example on some IDSes. Of course, this is not really effective.

6. Statistical analysis and fingerprinting. This was once my main research topic, but it ended in a dead end, because the main problem is the algorithm itself. Deriving a fingerprint from statistical analysis and then discarding packets matching the attack fingerprint is an anomaly detection technique. It is simple in principle, but it is not easy to avoid hurting legitimate packets and degenerating into random packet dropping. (Actually I considered it over-engineered: it requires detailed analysis of both attack packets and legitimate packets, while in practice it is enough to filter out most attack packets; even letting some attack packets through is fine as long as they do not cause DoS.) This is also the main subject of many researchers, the purpose being to identify attack packets.
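To illustrate the SYN cookie idea from point 3: the server encodes the connection parameters into its initial sequence number instead of allocating state, so spoofed SYNs cost it nothing. A simplified Python sketch, assuming a per-server secret and a coarse time counter (real implementations, such as the Linux kernel's, also encode the MSS):

    import hashlib
    import time

    SECRET = b"per-server random secret"   # assumption: generated at boot

    def _mac(src, sport, dst, dport, counter):
        data = "%s:%d:%s:%d:%d" % (src, sport, dst, dport, counter)
        digest = hashlib.sha256(SECRET + data.encode()).digest()
        return int.from_bytes(digest[:3], "big")          # 24-bit MAC

    def make_syn_cookie(src, sport, dst, dport):
        """Build the SYN-ACK initial sequence number; no state is stored."""
        counter = int(time.time()) >> 6                   # 64-second slots
        return ((counter & 0xFF) << 24) | _mac(src, sport, dst, dport, counter)

    def check_syn_cookie(ack, src, sport, dst, dport):
        """The client echoes cookie+1 in its final ACK, so validate ack-1
        against the current and previous time counters."""
        cookie = (ack - 1) & 0xFFFFFFFF
        now = int(time.time()) >> 6
        for counter in (now, now - 1):
            expected = ((counter & 0xFF) << 24) | _mac(src, sport, dst,
                                                       dport, counter)
            if cookie == expected:
                return True
        return False

Only a client that actually received the SYN-ACK can return a valid ACK, which is why completing the handshake rules out spoofed sources.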

Now back to the Riverhead product securitytest mentioned. I have only just learned about Riverhead's technology from their white paper, but based on my analysis its methods do not exceed the range described above.

Riverhead's core scheme is Detection, Diversion, and Mitigation: detect the attack, divert the traffic to their Guard product, and let the Guard perform the mitigation.

Its implementation steps are as follows.

Since there is no diagram, let me first define terms:

# The router close to the distributed attack sources: the remote router

# The router close to the victim: the proximal router

# The router to which Riverhead's Guard device is attached: the attachment router

Defense steps:

1. First, a DDoS attack is detected somewhere and the victim is identified.

2. The Guard sends a BGP notice to the remote router (a BGP announcement covering the victim's prefix, with higher priority than the original announcement), telling the remote router that there is a new route to the victim, pointing at the Guard's loopback interface; all traffic to the victim is thus diverted through the attachment router to the Guard.

3. The Guard inspects the flow, removes the attack traffic, and forwards the remaining traffic safely through the attachment router back to the victim.

The core is the Guard. Its technology, described in the white paper as the MVP (Multi-Verification Process) architecture, consists of the following five levels:

Filtering: this module contains static and dynamic DDoS filters. Static filters block non-essential traffic and can be user-defined or Riverhead's defaults. Dynamic filters are based on detailed behavioral analysis and flow analysis; they are updated in real time, increasing scrutiny of suspicious flows and blocking traffic confirmed to be malicious.

Anti-Spoofing: this module verifies whether packets entering the system are spoofed. The Guard uses unique, patented source-verification mechanisms to prevent spoofing. It also uses mechanisms that confirm legitimate flows, eliminating the discarding of legitimate packets.

Anomaly Recognition: this module monitors all traffic that the filtering and anti-spoofing modules have not filtered or discarded, comparing the flow records against baseline normal behavior to find anomalies. The idea is to use pattern matching to tell black-hat traffic from legitimate communication. The results are used to identify the attack source and type, and to propose rules for intercepting such traffic.

Features used in anomaly recognition include: attack size, packet size distribution, packet arrival rate, port distribution, number of concurrent flows, high-level protocol characteristics, and ingress rate.
Traffic is categorized by: source IP, source port, destination port, protocol type, and connection volume (daily, weekly).

Protocol Analysis: this module processes suspicious application-level attacks found by anomaly recognition, such as HTTP attacks. Protocol analysis also detects protocol misbehavior.

Rate Limiting: mainly deals with sources whose traffic consumes too many resources.

So the most important content is actually the statistical analysis in anomaly recognition, yet nothing particularly special can be seen from the above; there must be good algorithms underneath. FILTER actually handles attacks with very familiar, obvious signatures; anti-spoofing targets things like SYN flood, perhaps with a SYN cookie module, though there may be more patented techniques. Protocol analysis is probably relatively weak, but for common protocols it can check some specific attacks, protocol errors, and simple protocol-conformance behaviors, which is straightforward. Rate limiting is just random packet dropping, the method of last resort, hence the final level.
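Riverhead's actual algorithms are proprietary, but the baseline idea behind anomaly recognition can be illustrated generically: keep a smoothed estimate of a traffic feature and flag large deviations from it. A toy Python sketch under those assumptions:

    class RateBaseline:
        """EWMA baseline for one traffic feature, e.g. SYNs/second per port."""

        def __init__(self, alpha=0.05, threshold=4.0):
            self.alpha = alpha            # smoothing factor for the baseline
            self.threshold = threshold    # allowed multiple of the baseline
            self.mean = None

        def observe(self, value):
            """Feed one periodic measurement; return True if anomalous."""
            if self.mean is None:
                self.mean = float(value)  # bootstrap from the first sample
                return False
            anomalous = value > self.threshold * max(self.mean, 1.0)
            if not anomalous:
                # Only fold normal samples into the baseline, so an ongoing
                # flood cannot teach the detector that flooding is normal.
                self.mean = (1 - self.alpha) * self.mean + self.alpha * value
            return anomalous

Fed with per-second counts per traffic category (source IP, port, protocol), such a detector produces exactly the kind of interception rules the module above describes.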

Because this product is mainly for mitigation, not IP traceback, some important problems certainly remain, such as:

1. How to deal with a real bandwidth flood. If the router is gigabit but the attack accounts for 90% of the traffic, only 10% legitimate traffic gets through, and the router will have started dropping packets at random before the Guard ever sees them. (There is no way around this; it is the bottleneck of all defense technologies.)

2. Realistic attacks. Attacks that are hard or impossible to identify, for example ones basically identical in form to normal traffic and with very similar statistics, are difficult to distinguish. Some attacks, such as reflective e-mail attacks, consist of perfectly legitimate traffic and are very hard to classify.






