We are committed to making the Internet a safe and reliable place to do business and communicate.
At Verisign Labs, research is not just exploration for its own sake: we develop technologies that will play a significant role in the evolution of the Internet. Our research spans a wide range of technical disciplines, and our researchers collaborate closely with engineers, platform developers, data architects and operations experts. Verisign Labs initiatives are deeply embedded in Verisign's business areas. A selection of our ongoing research projects is listed here, along with the publications of individual Verisign Labs researchers.
A new protocol is being designed from the ground up to address the deficiencies of WHOIS: the Registration Data Access Protocol (RDAP). Verisign’s Registry Services Lab has been actively involved in IETF and ICANN efforts to support RDAP standardization and adoption.
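One of RDAP's key improvements over WHOIS is that responses are structured JSON rather than free-form text, which the IETF's RDAP RFCs standardize. As a hedged illustration of what this buys client software, the sketch below parses a minimal, hand-written response in the RDAP domain-object shape; real registry responses carry many more fields.

```python
import json

# A minimal, hand-written response in the shape of an RDAP domain object
# (per the RDAP JSON RFCs); real registry responses carry many more fields.
sample = """
{
  "objectClassName": "domain",
  "ldhName": "EXAMPLE.COM",
  "status": ["client delete prohibited"],
  "events": [
    {"eventAction": "registration", "eventDate": "1995-08-14T04:00:00Z"},
    {"eventAction": "expiration", "eventDate": "2026-08-13T04:00:00Z"}
  ]
}
"""

def event_dates(rdap_doc: dict) -> dict:
    """Map each eventAction in an RDAP domain object to its eventDate."""
    return {e["eventAction"]: e["eventDate"] for e in rdap_doc.get("events", [])}

doc = json.loads(sample)
print(doc["ldhName"])                    # EXAMPLE.COM
print(event_dates(doc)["registration"])  # 1995-08-14T04:00:00Z
```

Because the fields are machine-readable, a client can pull out exactly the data it needs instead of screen-scraping WHOIS text.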
DNS-based Authentication of Named Entities (DANE) is a suite of protocols being standardized by the IETF to enhance Internet security by allowing keys to be placed into DNS and secured by DNSSEC. DANE is a relevant security solution for deployment in today’s Internet, and it is ready for use. Publishing DANE records in DNS zones lets authorities extend authentication from DNSSEC data to DNS-reliant services (e.g., S/MIME, TLS, IPsec). Verisign Labs researchers are working with the Internet community to develop prototypes and reference implementations, advance standards, and promote awareness of DANE and its full potential to advance secure key learning.
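To give a concrete sense of how DANE data is produced, here is a minimal sketch of building the RDATA for a TLSA record as defined in RFC 6698. The certificate bytes below are a dummy placeholder; for selector 1 a real implementation would first extract the SubjectPublicKeyInfo from the certificate with an ASN.1 parser, which is out of scope here.

```python
import hashlib

def tlsa_rdata(cert_data: bytes, usage: int = 3, selector: int = 1,
               mtype: int = 1) -> str:
    """Build the presentation-format RDATA of a TLSA record (RFC 6698):
    usage=3 (DANE-EE), selector=1 (SubjectPublicKeyInfo),
    mtype=1 (SHA-256 of the selected data).
    NOTE: for selector=1 the hash must cover only the SPKI portion of the
    certificate; extracting it needs an ASN.1 parser, so this sketch hashes
    whatever bytes it is given and leaves extraction to the caller."""
    digest = hashlib.sha256(cert_data).hexdigest()
    return f"{usage} {selector} {mtype} {digest}"

# Dummy stand-in for DER-encoded SubjectPublicKeyInfo bytes:
spki = b"\x30\x82\x01\x0a\x02\x82\x01\x01"
print(tlsa_rdata(spki))
```

A zone operator would publish the resulting string in a TLSA record, and DNSSEC signatures over that record are what let clients trust the binding.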
The TCP initiative is working to bring DNS-over-TCP performance closer to par with DNS-over-UDP, and to enable new privacy-oriented DNS functionality with DNS-over-TLS. Verisign Labs researchers are working with the Internet community to develop prototypes and reference implementations, conduct measurements and analysis, and advance standards.
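The TCP transport itself is simple: RFC 1035 requires each DNS message sent over TCP to be prefixed with a two-byte, big-endian length field. A minimal sketch of that framing follows (connection management, pipelining and the TLS layer are out of scope; the message bytes are placeholders, not real DNS packets):

```python
import struct

def frame(msg: bytes) -> bytes:
    """Prefix a DNS message with its two-byte big-endian length
    (RFC 1035, section 4.2.2)."""
    return struct.pack("!H", len(msg)) + msg

def deframe(stream: bytes):
    """Yield the DNS messages contained in a stream of length-prefixed
    messages, e.g. bytes read from a TCP socket."""
    off = 0
    while off + 2 <= len(stream):
        (n,) = struct.unpack_from("!H", stream, off)
        yield stream[off + 2 : off + 2 + n]
        off += 2 + n

a, b = b"\x12\x34querymsg", b"\xab\xcdresponse"
wire = frame(a) + frame(b)
assert list(deframe(wire)) == [a, b]
```

The length prefix is what allows several queries and responses to share one connection, which is central to closing the performance gap with UDP.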
The DNS libraries used in present-day operating systems emerged decades ago, so much of the power of the modern DNS is difficult or impossible to access from most end systems. This deficit is a major obstacle to large-scale adoption of DNSSEC. The getdns research aims to overcome this obstacle, along with another: application developers find that DNS features are not application-friendly. getdns implements an API specification developed by application experts, maintained in an open community process led by those experts. Join the discussion mailing list.
Researchers at Verisign Labs and NLnet Labs lead the getdns library open source team, with many collaborators participating. Notable features include full DNSSEC support, including a validating stub mode; support for DNS-over-TLS; bindings for application languages; native support for asynchronous events, including native modes for Python and Node.js; and easy extensibility for new resource records.
Internet Distributed Denial of Service (DDoS) attacks are widespread but hard to defend against because of the volatility of the attack methods and patterns used by attackers. Most DDoS attacks are launched by botnets: networks of infected machines under the control of a malicious entity. As defenses are deployed, attacks evolve and become more sophisticated to circumvent them.
Verisign Labs’ Aziz Mohaisen is collaborating with Wentao Chang, An Wang, and Professor Songqing Chen of George Mason University’s Computer Science Department on the measurement and analysis of DDoS attacks and botnets. In their paper, Delving into Internet DDoS Attacks by Botnets: Characterization and Analysis, they present an in-depth analysis of 50,704 different Internet DDoS attacks directly observed over a seven-month period. Their results provide new insights for understanding and defending against modern DDoS attacks at different levels (e.g., organization and country). This research will be presented at the IEEE Conference on Dependable Systems and Networks.
In their study of botnets, the researchers analyze some of the most active botnets on the Internet, based on a public dataset collected over a period of seven months. In the paper, Measuring Botnets in the Wild: Some New Trends, they examine and compare the attack capabilities of different families of today’s active botnets. Their analysis reveals that different botnets have begun to collaborate when launching DDoS attacks.
The DANE (DNS-based Authentication of Named Entities) protocol is an emerging DNS innovation that provides secure key learning with DNS and DNSSEC. Verisign Labs scientists are working with internal and community collaborators to prototype, promote, and realize secure key learning with DANE. Try out our new S/MIME object security library (libsmaug) and DANE S/MIME plug-in for Mozilla’s Thunderbird! For more information, see our recent blog post about DANE.
With the increasing deployment of DNSSEC, exciting new uses are emerging that leverage the DNS to store and verify cryptographic keying material (such as public keys, certificates, and fingerprints). The DANE (DNS-based Authentication of Named Entities) protocol and new DNS records like TLSA are among the principal enablers of these uses. This presentation, authored by Shumon Huque and offered at the 2014 Internet2 Tech Exchange and at ICANN 52, provides an overview of DANE in the context of DNSSEC, of use cases enabled by DANE today and in the future, and of available software tools. It also discusses getdns, a new open DNS API (with an open source implementation at getdnsapi.net). With getdns, application programmers can easily use the DNSSEC and DANE elements of DNS without needing to be deep experts in the DNS protocol.
Read full presentation, DANE and Application Uses of DNSSEC
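For example, before a client (whether hand-rolled or built on a library such as getdns) can fetch the TLSA record for a TLS service, it must construct the owner name prescribed by RFC 6698, of the form _port._proto.host. A minimal sketch:

```python
def tlsa_qname(host: str, port: int = 443, proto: str = "tcp") -> str:
    """Owner name for a TLSA lookup, per RFC 6698 section 3:
    the port and protocol are prepended as underscore labels."""
    return f"_{port}._{proto}.{host.rstrip('.')}."

print(tlsa_qname("www.example.com"))       # _443._tcp.www.example.com.
print(tlsa_qname("mail.example.com", 25))  # _25._tcp.mail.example.com.
```

The client then queries that name for type TLSA and, after DNSSEC validation, matches the returned association data against the server's certificate.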
In the paper, On the Characteristics of Persistent Communities of Enterprises, Verisign Labs’ Mark Teodoro and Allison Mankin and Colorado State University’s Han Zhang and Christos Papadopoulos investigate the set of hosts that communicate with an enterprise. Some visit the enterprise occasionally and never come back; others communicate with it very frequently. The team defines the latter as Persistent Hosts and the set of them as a Persistent Community. Characterizing Persistent Communities benefits enterprises in several ways, including security, network management, traffic engineering and more.
In this paper, they use 78 billion flow records collected from a sample of 84 enterprises over an entire month to explore the characteristics of Persistent Communities. First, they characterize the Persistent Community of each enterprise and find that for 90% of the enterprises, less than 21% of the hosts are persistent, yet those hosts contribute more than 50% of the traffic. Then they correlate the Persistent Communities of multiple enterprises. They find that as the number of enterprises that a Persistent Host communicates with increases, its communication hours each day also increase. Moreover, they correlate Persistent Communities with DDoS attacks and find that while some Persistent Hosts are involved in UDP-based DDoS attacks, they contribute only a small portion of the overall attack traffic. Based on their findings, they give a simple case study of using Persistent Communities to detect business changes in enterprises, prioritize traffic and potentially improve DDoS protection.
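As a rough illustration of the idea (not the paper's actual criterion, which the authors define precisely), a host could be classed as persistent if it appears in an enterprise's flow records on enough distinct days of the observation window:

```python
from collections import defaultdict

def persistent_hosts(flows, min_days: int = 20) -> set:
    """Classify hosts as 'persistent' if they appear in flow records on at
    least min_days distinct days. Both the threshold and the (day, host,
    byte_count) record layout are illustrative simplifications."""
    days_seen = defaultdict(set)
    for day, host, _byte_count in flows:
        days_seen[host].add(day)
    return {h for h, days in days_seen.items() if len(days) >= min_days}

# A host seen on all 30 days vs. a one-off visitor:
flows = [(d, "10.0.0.1", 100) for d in range(30)] + [(1, "10.0.0.9", 50)]
print(persistent_hosts(flows, min_days=20))  # {'10.0.0.1'}
```

At the paper's scale (78 billion flow records), the same grouping would be done with streaming or distributed aggregation rather than an in-memory dict.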
Read full paper, On the Characteristics of Persistent Communities of Enterprises
As more complex security services have been added to today’s Internet, it has become increasingly difficult to quantify their vulnerability to compromise. The concept of an “attack surface” has emerged in recent years as a measure of such vulnerabilities; however, systematically quantifying the attack surfaces of networked systems remains an open challenge. In this work, Verisign Labs principal scientist Eric Osterweil, Verisign CSO Danny McPherson and UCLA’s Lixia Zhang propose a methodology to quantify the attack surface and visually represent semantically different components (or resources) of such systems by identifying their dependencies. To illustrate the efficacy of their methodology, the team examines two real Internet standards (the X.509 CA verification system and DANE) as case studies. They believe this work represents a first step toward systematically modeling the dependencies of (and interdependencies between) networked systems, and it shows the usability benefits of leveraging existing services.
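One toy way to see how dependency analysis can quantify an attack surface: model a service's resources as a graph and count what it transitively depends on, since every reachable dependency is something an attacker could target. The graph below is hypothetical and far simpler than the paper's methodology:

```python
def transitive_deps(graph: dict, node: str) -> set:
    """Resources a service transitively depends on: a toy proxy for its
    attack surface, counting each reachable dependency as a potential
    target. Iterative depth-first search over an adjacency-list graph."""
    seen, stack = set(), [node]
    while stack:
        n = stack.pop()
        for dep in graph.get(n, []):
            if dep not in seen:
                seen.add(dep)
                stack.append(dep)
    return seen

# Hypothetical dependency graph: a CA-verified HTTPS service depends on a
# trust-store bundle of many roots, while a DANE deployment depends on a
# TLSA record secured by a DNSSEC chain.
graph = {
    "https-pki": ["ca-bundle"],
    "ca-bundle": ["root-ca-1", "root-ca-2", "root-ca-3"],
    "https-dane": ["tlsa-record"],
    "tlsa-record": ["dnssec-chain"],
}
print(len(transitive_deps(graph, "https-pki")))   # 4
print(len(transitive_deps(graph, "https-dane")))  # 2
```

Even this toy comparison shows the flavor of the case studies: the two systems expose structurally different sets of dependencies to an attacker.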
This research was presented at the Ninth Workshop on Secure Network Protocols (NPSec) and received its best paper award.
As the world rapidly runs out of available IPv4 address space, the global Internet community has mounted a remarkable collaborative effort to transition the Internet to the IPv6 protocol. Events like World IPv6 Day and World IPv6 Launch brought together organizations working across all levels of network connectivity to raise awareness of the ever-increasing need for this change. Held on June 8, 2011, World IPv6 Day marked the beginning of the changeover process. Since then, IPv6 adoption has been a closely watched and increasingly important metric.
In his latest paper, Measuring IPv6 Adoption, Verisign Labs principal scientist Eric Osterweil collaborates with Jakub Czyz, Jing Zhang and Michael Bailey of the University of Michigan, Mark Allman of the International Computer Science Institute, and Scott Iekel-Johnson of Arbor Networks to provide compelling research on IPv6 adoption rates. The team explores 12 metrics using 10 global-scale datasets to create the longest and broadest measurement of IPv6 adoption to date. From this perspective, they find that adoption, relative to IPv4, varies by two orders of magnitude depending on the measure examined, and that care must be taken when evaluating adoption metrics in isolation. Further, they find that regional adoption is not uniform. Finally, and perhaps most surprisingly, they find that over the last three years the nature of IPv6 utilization, in terms of traffic, content, reliance on transition technology, and performance, has shifted dramatically from prior findings, indicating a maturing of the protocol into production mode. The team believes IPv6’s recent growth and this changing utilization signal a true quantum leap.
The Transition to IPv6 (video)
The Tor project provides individuals with a mechanism to communicate anonymously on the Internet. Furthermore, Tor can provide anonymity to servers that are configured to receive inbound connections only through Tor (more commonly called hidden services). To route requests to these hidden services, a namespace is used to identify them, and a namespace under the non-delegated (pseudo) top-level domain (TLD) .onion was selected. Although the Tor system was designed to prevent .onion requests from leaking into the global DNS resolution process, numerous requests are still observed in the global DNS.
This study by Verisign Labs researchers Aziz Mohaisen and Matt Thomas presents the state of .onion requests received at the global public DNS A and J root nodes, and a complementary measurement from the DITL (Day in the Life of the Internet) data repository. It also presents potential explanations of the leakage and highlights trends associated with global censorship events. This research was presented at the 7th Workshop on Hot Topics in Privacy Enhancing Technologies (HotPETs 2014).
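Detecting such leakage in a query stream reduces to checking the rightmost label of each query name. A minimal sketch (the query names below are made up; this is not the authors' measurement code):

```python
def is_onion_leak(qname: str) -> bool:
    """True if a DNS query name falls under the .onion pseudo-TLD and
    therefore should never have reached the public DNS."""
    labels = qname.rstrip(".").lower().split(".")
    return bool(labels) and labels[-1] == "onion"

queries = ["example.com.", "abcdefghij234567.onion.", "onion.example.net."]
leaks = [q for q in queries if is_onion_leak(q)]
print(leaks)  # ['abcdefghij234567.onion.']
```

Note that only names whose final label is onion count; a name like onion.example.net resolves normally and is not a leak.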
URLs often utilize query strings (i.e., key-value pairs appended to the URL path) to pass session parameters and form data. While these arguments are often benign and opaque, they sometimes contain tracking mechanisms, user demographics, and other privacy-sensitive information. In isolation such URLs are not particularly problematic, but our Web 2.0 information-sharing culture means these URLs are increasingly being broadcast in public forums.
Our research has examined nearly 900 million user-submitted URLs to gauge the prevalence and severity of such privacy leaks. We found troves of sensitive data, including 1.7 million email addresses, over 10 million fields of personal information, and several cases where usernames and passwords were passed in unencrypted plain text. With this as motivation, we propose the development of a privacy-aware URL sanitization service. Such a service would transform input addresses by stripping non-essential key-value pairs and/or notifying users when sensitive data is critical to proper page rendering.
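A sanitization service along these lines could be sketched with Python's standard urllib.parse. The allow-list of "essential" keys here is hypothetical; a real service would have to learn, per site, which parameters a page actually needs to render:

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

# Hypothetical allow-list; a real service would learn per-site which
# query keys are required for the page to render.
ESSENTIAL = {"id", "q", "page"}

def sanitize(url: str) -> str:
    """Strip query-string key-value pairs not on the allow-list,
    leaving scheme, host, path and fragment untouched."""
    parts = urlsplit(url)
    kept = [(k, v) for k, v in parse_qsl(parts.query, keep_blank_values=True)
            if k in ESSENTIAL]
    return urlunsplit(parts._replace(query=urlencode(kept)))

url = "https://example.com/view?id=42&email=alice%40example.com&utm_source=feed"
print(sanitize(url))  # https://example.com/view?id=42
```

In this example the embedded email address and tracking parameter are dropped while the page-identifying key survives, which is exactly the transformation the proposed service would apply before a URL is shared publicly.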