See also my Google Scholar citations.


Towards a Model of DNS Client Behavior

Kyle Schomp, Michael Rabinovich, and Mark Allman
In Proceedings of the 2016 Passive and Active Measurement Conference (PAM ’16), Heraklion, Crete, Greece, March 2016

The Domain Name System (DNS) is a critical component of the Internet infrastructure as it maps human-readable hostnames into the IP addresses the network uses to route traffic. Yet, the DNS behavior of individual clients is not well understood. In this paper, we present a characterization of DNS clients with an eye towards developing an analytical model of client interaction with the larger DNS ecosystem. While this is initial work and we do not arrive at a DNS workload model, we highlight a variety of behaviors and characteristics that enhance our mental models of how DNS operates and move us towards an analytical model of client-side DNS operation.
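
At its core, the client behavior studied here is a stream of small UDP queries to a recursive resolver. As a point of reference, here is a minimal Python sketch of one such query, using only the standard library and a public resolver chosen for illustration (this is not code from the paper):

    import socket
    import struct

    def build_query(hostname, qtype=1):  # qtype 1 = A record
        # Header: transaction ID, flags (RD=1 asks the resolver to recurse),
        # one question, zero answer/authority/additional records.
        header = struct.pack(">HHHHHH", 0x1234, 0x0100, 1, 0, 0, 0)
        # QNAME is a sequence of length-prefixed labels ending in a zero byte.
        qname = b"".join(bytes([len(p)]) + p.encode() for p in hostname.split(".")) + b"\x00"
        return header + qname + struct.pack(">HH", qtype, 1)  # QCLASS 1 = IN

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(2.0)
    sock.sendto(build_query("example.com"), ("8.8.8.8", 53))
    response, _ = sock.recvfrom(512)
    print(len(response), "byte response")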

Is The Web HTTP/2 Yet?

Matteo Varvello, Kyle Schomp, David Naylor, Jeremy Blackburn, Alessandro Finamore, and Konstantina Papagiannaki
In Proceedings of the 2016 Passive and Active Measurement Conference (PAM ’16), Heraklion, Crete, Greece, March 2016

Version 2 of the Hypertext Transfer Protocol (HTTP/2) was finalized in May 2015 as RFC 7540. It addresses well-known problems with HTTP/1.1 (e.g., head-of-line blocking and redundant headers) and introduces new features (e.g., server push and content priority). Though HTTP/2 is designed to be the future of the web, it remains unclear whether the web will (or should) hop on board. To shed light on this question, we built a measurement platform that monitors HTTP/2 adoption and performance across the Alexa top 1 million websites on a daily basis. Our system is live and up-to-date results can be viewed at isthewebhttp2yet.com. In this paper, we report findings from an 11-month measurement campaign (November 2014 to October 2015). As of October 2015, we find 68,000 websites reporting HTTP/2 support, of which about 10,000 actually serve content with it. Unsurprisingly, popular sites are quicker to adopt HTTP/2, and 31% of the Alexa top 100 already support it. For the most part, websites do not change as they move from HTTP/1.1 to HTTP/2; current web development practices like inlining and domain sharding are still present. Contrary to previous results, we find that these practices make HTTP/2 more resilient to losses and jitter. In all, we find that 80% of websites supporting HTTP/2 experience a decrease in page load time compared with HTTP/1.1, and the decrease grows in mobile networks.
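
The usual way to detect HTTP/2 support is ALPN negotiation during the TLS handshake. A sketch of that check in Python (an illustration of the general technique, not the measurement platform's code):

    import socket
    import ssl

    def supports_h2(host, port=443, timeout=5.0):
        # Offer both protocols via ALPN and see which one the server selects.
        ctx = ssl.create_default_context()
        ctx.set_alpn_protocols(["h2", "http/1.1"])
        with socket.create_connection((host, port), timeout=timeout) as raw:
            with ctx.wrap_socket(raw, server_hostname=host) as tls:
                return tls.selected_alpn_protocol() == "h2"

    print(supports_h2("www.google.com"))  # hostname chosen for illustration

Note that agreeing on "h2" during the handshake corresponds to the paper's "reporting HTTP/2 support"; confirming that a site actually serves content over HTTP/2 requires completing a request.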

multi-context TLS (mcTLS): Enabling Secure In-Network Functionality in TLS

David Naylor, Kyle Schomp, Matteo Varvello, Ilias Leontiadis, Jeremy Blackburn, Diego Lopez, Konstantina Papagiannaki, Pablo Rodriguez, and Peter Steenkiste
In Proceedings of the 2015 ACM SIGCOMM Conference (SIGCOMM ’15), London, England, August 2015

Transport Layer Security (TLS) is the de facto protocol supporting secure HTTP (HTTPS), and is being discussed as the default transport protocol for HTTP/2.0. It has seen wide adoption and currently carries a significant fraction of overall HTTP traffic (Facebook, Google, and Twitter use it by default). However, TLS makes the fundamental assumption that all functionality resides solely at the endpoints, and is thus unable to utilize the many in-network services that optimize network resource usage, improve user experience, and protect clients and servers from security threats. Re-introducing such in-network functionality into secure TLS sessions today is done through hacks, in many cases weakening overall security.

In this paper we introduce multi-context TLS (mcTLS) which enhances TLS by allowing middleboxes to be fully supported participants in TLS sessions. mcTLS breaks the "all-or-nothing" security model by allowing endpoints and content providers to explicitly introduce middleboxes in secure end-to-end sessions, while deciding whether they should have read or write access, and to which specific parts of the content. mcTLS enables transparency and control for both clients and servers.

We evaluate a prototype mcTLS implementation in both controlled and "live" experiments, showing that the benefits it offers come with minimal overhead. More importantly, we show that mcTLS can be incrementally deployed and requires only small changes to clients, servers, and middleboxes for a large number of use cases.
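
The key abstraction is the encryption context: endpoints carve the session into contexts and grant each middlebox read or write access per context by handing out the matching keys. A toy Python sketch of that access-control model (illustrative only; this is not the mcTLS wire protocol or key schedule):

    from secrets import token_bytes

    class Context:
        """One slice of the session, with separate keys per permission level."""
        def __init__(self, name):
            self.name = name
            self.read_key = token_bytes(16)   # required to decrypt this slice
            self.write_key = token_bytes(16)  # required to modify this slice

    # Endpoints agree on the contexts, then give each middlebox only the keys
    # matching the access it was granted for each context.
    contexts = {name: Context(name) for name in ("headers", "body")}
    cache_proxy_keys = {
        "headers": (contexts["headers"].read_key, contexts["headers"].write_key),
        "body": (contexts["body"].read_key, None),  # read-only on the body
    }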

DNS Resolvers Considered Harmful (slides)

Kyle Schomp, Mark Allman, and Michael Rabinovich
In Proceedings of the 2014 ACM SIGCOMM Workshop on Hot Topics in Networks (HotNets ’14), Los Angeles, CA, October 2014

The Domain Name System (DNS) is a critical component of the Internet infrastructure that has many security vulnerabilities. In particular, shared DNS resolvers are a notorious security weak spot in the system. We propose an unorthodox approach for tackling vulnerabilities in shared DNS resolvers: removing shared DNS resolvers entirely and leaving recursive resolution to the clients. We show that the two primary costs of this approach—loss of performance and an increase in system load—are modest and therefore conclude that this approach is beneficial for strengthening the DNS by reducing the attack surface.
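
"Leaving recursive resolution to the clients" means each client walks the delegation hierarchy itself rather than asking a shared resolver to do so. A rough sketch using the dnspython package (an illustration of iterative resolution, not the paper's measurement code):

    import dns.message
    import dns.query
    import dns.rdatatype

    def iterative_resolve(name, server="198.41.0.4"):  # a.root-servers.net
        """Follow referrals downward from a root server, with no shared resolver."""
        for _ in range(10):  # safety limit on referral depth
            query = dns.message.make_query(name, dns.rdatatype.A)
            reply = dns.query.udp(query, server, timeout=3.0)
            answers = [rr.address for rrset in reply.answer
                       if rrset.rdtype == dns.rdatatype.A for rr in rrset]
            if answers:
                return answers
            # No answer yet: follow the referral via a glue A record.
            glue = [rr.address for rrset in reply.additional
                    if rrset.rdtype == dns.rdatatype.A for rr in rrset]
            if not glue:
                raise RuntimeError("referral without glue; resolve the NS name separately")
            server = glue[0]
        raise RuntimeError("too many referrals")

    print(iterative_resolve("example.com"))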

Assessing DNS Vulnerability to Record Injection (slides) (data)

Kyle Schomp, Tom Callahan, Michael Rabinovich, and Mark Allman
In Proceedings of the 2014 Passive and Active Measurement Conference (PAM ’14), Los Angeles, CA, March 2014

The Domain Name System (DNS) is a critical component of the Internet infrastructure as it maps human-readable names to IP addresses. Injecting fraudulent mappings allows an attacker to divert users from intended destinations to those of an attacker's choosing. In this paper, we measure the Internet's vulnerability to DNS record injection attacks, including a new attack we uncover. We find that record injection vulnerabilities are fairly common, even years after some of them were first uncovered.

On Measuring the Client-Side DNS Infrastructure (slides) (data)

Kyle Schomp, Tom Callahan, Michael Rabinovich, and Mark Allman
In Proceedings of the 2013 Internet Measurement Conference (IMC ’13), Barcelona, Spain, October 2013

The Domain Name System (DNS) is a critical component of the Internet infrastructure. It allows users to interact with Web sites using human-readable names and provides a foundation for transparent client request distribution among servers in Web platforms, such as content delivery networks. In this paper, we present methodologies for efficiently discovering the complex client-side DNS infrastructure. We further develop measurement techniques for isolating the behavior of the distinct actors in the infrastructure. Using these strategies, we study various aspects of the client-side DNS infrastructure and its behavior with respect to caching, both in aggregate and separately for different actors.
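
As one example of the style of measurement involved, a resolver's caching can be observed from the outside by watching TTL decay across repeated queries. A simplified dnspython sketch (assuming a resolver you are permitted to query; this is not the paper's actual tooling):

    import time
    import dns.message
    import dns.query

    def ttl_pair(resolver_ip, name="example.com", gap=5):
        """Return the answer TTLs from two queries spaced `gap` seconds apart."""
        def ask():
            reply = dns.query.udp(dns.message.make_query(name, "A"),
                                  resolver_ip, timeout=3.0)
            return reply.answer[0].ttl if reply.answer else None
        first = ask()
        time.sleep(gap)
        return first, ask()

    # A cached record's TTL drops by roughly `gap` between the two queries;
    # a TTL that resets to the authoritative value suggests a fresh fetch.
    print(ttl_pair("8.8.8.8"))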

Complexity and Security of the Domain Name System

Kyle Schomp
PhD Dissertation, Case Western Reserve University, Cleveland, OH, 2016

The Domain Name System (DNS) provides mapping of meaningful names to arbitrary data for applications and services on the Internet. Since its original design, the system has grown in complexity and our understanding of it has lagged behind. In this dissertation, we perform measurement studies of the DNS infrastructure demonstrating the complexity of the system and showing that different parts of the infrastructure exhibit varying behaviors, some of which violate the DNS specification. The DNS also has known security weaknesses, and we reinforce this point by uncovering a new vulnerability in one component of the system. As a result, understanding and maintaining the DNS is increasingly hard. In response to these issues, we propose a modification to the DNS that simplifies the resolution path and reduces the attack surface. We observe that the potential costs of this modification are modest and discuss ways they may be mitigated.

Dynamic TCP Proxies: Coping with Mobility and Disadvantaged Hosts in MANETs

Kyle Schomp
Master's Thesis, Case Western Reserve University, Cleveland, OH, 2010

TCP proxies have been introduced as a method to improve throughput and reduce congestion in mobile ad hoc networks. Proxies split the path into several shorter paths, which have higher throughput due to reduced packet loss and round-trip time. As a side effect, congestion is reduced because fewer link-layer retransmissions occur. In current protocols, proxies are assigned at the start of a transfer and must be used for its duration. Due to mobility and changing congestion, pinned proxies can actually reduce throughput. In this thesis, we present a second version of the DTCP protocol that adds the ability to switch proxies in the middle of a transfer. We demonstrate in the Network Simulator version 2 (ns-2) that the new protocol performs better than other related protocols in simulated mobile ad hoc networks with varying levels of mobility and congestion.
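
As a back-of-the-envelope for why splitting a path helps, the well-known Mathis et al. model (not taken from the thesis) ties steady-state TCP throughput to round-trip time and loss rate:

    def mathis_throughput(mss=1460, rtt=0.2, loss=0.02, c=1.22):
        """Steady-state TCP throughput in bytes/s per the Mathis et al. model."""
        return (mss / rtt) * (c / loss ** 0.5)

    end_to_end = mathis_throughput()                    # one long, lossy path
    # A mid-path proxy roughly halves each segment's RTT and the loss that
    # each segment observes; the slower segment bounds the relayed transfer.
    per_segment = mathis_throughput(rtt=0.1, loss=0.01)
    print(per_segment / end_to_end)                     # ~2.8x in this toy case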