Algorithm study encompasses concepts found both within computer science and beyond. For instance, some of the major shifts in Internet routing standards can be viewed as debates over the deficiencies of one shortest-path algorithm and the relative advantages of another, while the basic notions used by biologists to express similarities among genes and genomes have algorithmic definitions. The subject of algorithms is a powerful lens through which to view the field of computer science in general, drawing on concepts from applied mathematics such as combinatorics and graph theory. The algorithmic enterprise consists of two fundamental components: getting to the mathematically clean core of a problem, and then identifying the appropriate algorithm design techniques based on the structure of the problem.
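As a concrete illustration of a shortest-path computation (on an invented toy network, not tied to any particular routing protocol), here is a minimal sketch of Dijkstra's algorithm in Python:

```python
import heapq

def dijkstra(graph, source):
    # graph: dict mapping node -> list of (neighbor, weight) pairs
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry; a shorter path was already found
        for v, w in graph[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# Toy network: edges A-B (cost 1), B-C (cost 2), A-C (cost 5)
g = {"A": [("B", 1), ("C", 5)],
     "B": [("A", 1), ("C", 2)],
     "C": [("A", 5), ("B", 2)]}
print(dijkstra(g, "A"))  # {'A': 0, 'B': 1, 'C': 3}
```

Note how the direct A-C link (cost 5) loses to the two-hop route through B (cost 3), which is exactly the kind of comparison a link-state routing computation performs.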
Dynamical systems describe the time-varying processes fundamental to many scientific theories. Classical continuous-time systems are described by ordinary or partial differential equations, whereas discrete-time systems are modeled using difference equations or cellular automata. A typical goal in dynamical systems is to predict the long-term behavior of a system under perturbations or small changes to its initial parameters. One of the most compelling aspects of the subject is the opportunity to study a question using a diverse set of methods, such as mathematical modeling or computational and experimental investigation combined with theoretical analysis. This diversity is reflected at the Center for Applied Mathematics, where faculty and students study dynamical systems ranging from mechanical systems and fluid dynamics to questions in mathematical biology and stochastic systems in economics.
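A minimal sketch of sensitivity to initial conditions, using the logistic map as a stand-in discrete-time system (the parameter r = 4 and the initial values are illustrative):

```python
def logistic_orbit(x0, r=4.0, steps=50):
    """Iterate the logistic map x -> r * x * (1 - x)."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_orbit(0.2)
b = logistic_orbit(0.2 + 1e-9)  # a perturbation of one part in a billion

# The two orbits start indistinguishably close, yet within a few dozen
# iterations they diverge to order one: long-term prediction fails.
gap = max(abs(x - y) for x, y in zip(a, b))
```

At r = 4 the map is chaotic, so the initial gap of 1e-9 grows roughly exponentially; this is the basic obstruction to predicting long-term behavior from imperfectly known initial data.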
Mathematical Finance is the field of mathematics that studies financial markets. It began in the 1970s with the seminal paper of Black, Scholes, and Merton on option pricing and has quickly evolved into a broad field that now covers such topics as credit risk, portfolio optimization, liquidity risk, and arbitrage-free pricing. Whereas stocks and bonds are the most heavily traded securities on financial markets, financial engineers spend most of their time on derivative products such as options, swaps, and CDOs. For example, a put option is a financial contract whose value is a function of the value of an underlying stock; its payoff is positive when the stock falls below a given value. By modelling the dynamics of the stock as a stochastic process, one can derive the fair value of the option. For instance, when the stock is modelled as a geometric Brownian motion, the result is the famous Black-Scholes formula. Despite its popularity, the Black-Scholes model fails to incorporate market characteristics such as illiquidity, transaction costs, and stochastic volatility. Backward stochastic differential equations, viscosity solutions, and Malliavin calculus are among the most recent tools used to tackle these problems.
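The Black-Scholes put price can be written down directly; the parameter values in the sketch below are illustrative, not drawn from market data:

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + erf(x / sqrt(2)))

def black_scholes_put(S, K, T, r, sigma):
    # S: spot price, K: strike, T: maturity in years,
    # r: risk-free rate, sigma: volatility
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return K * exp(-r * T) * norm_cdf(-d2) - S * norm_cdf(-d1)

# At-the-money put: spot 100, strike 100, 1 year, 5% rate, 20% volatility
price = black_scholes_put(100, 100, 1.0, 0.05, 0.2)  # about 5.57
```

The formula prices exactly the payoff described above: it is the discounted expected value of max(K - S_T, 0) when S follows a geometric Brownian motion under the risk-neutral measure.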
Credit risk is another important topic that has attracted much attention recently due to the exponential growth of the credit derivatives market. The valuation of Collateralised Debt Obligations (CDOs) is probably the most challenging problem in credit risk research to date. To finance their activities, firms issue bonds; coupons are paid by the issuer of the bond unless the firm defaults. The simplest form of CDO is a portfolio of such credit-sensitive bonds whose purchase is funded by different classes of investors. The complexity of the problem comes from the dependence among the firms' defaults and from the inner structure of the CDO. Doubly stochastic random times and copula models are the essential mathematical tools in this research area.
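A rough sketch of how default dependence might be simulated with a one-factor Gaussian copula; all parameter values (portfolio size, default probability, correlation) are illustrative assumptions:

```python
import math
import random
from statistics import NormalDist

def simulate_losses(n_firms=100, p=0.05, rho=0.3, n_paths=5000, seed=0):
    """One-factor Gaussian copula: firm i defaults when
    sqrt(rho)*M + sqrt(1-rho)*Z_i < Phi^{-1}(p),
    where M is a market factor common to all firms."""
    rng = random.Random(seed)
    c = NormalDist().inv_cdf(p)            # common default threshold
    a, b = math.sqrt(rho), math.sqrt(1 - rho)
    losses = []
    for _ in range(n_paths):
        M = rng.gauss(0, 1)                # shared factor couples the defaults
        defaults = sum(1 for _ in range(n_firms)
                       if a * M + b * rng.gauss(0, 1) < c)
        losses.append(defaults)
    return losses

losses = simulate_losses()
mean_loss = sum(losses) / len(losses)      # close to n_firms * p = 5
```

Each firm still defaults with probability p, but the common factor M makes defaults cluster: the loss distribution acquires a heavy right tail, and it is exactly this tail that the different CDO tranches carve up.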
At a more practical level, financial engineers use Monte Carlo methods and statistical inference to calibrate their models to real data, and numerical methods to implement them.
From the perspective of an applied mathematician, fluid mechanics encompasses a wealth of interesting problems. Fluid motion is governed by the Navier-Stokes equations; the apparent simplicity of these differential equations belies the range of fascinating phenomena that emerge in the motion of liquids and gases. Understanding problems in such disparate application areas as groundwater hydrology, combustion mechanics, ocean mixing, animal swimming or flight, or surface tension driven motion, hinges on a deeper exploration of fluid mechanics. Attempts to understand fluid motion from a theoretical perspective lead to mathematical questions involving numerical analysis, dynamical systems, stochastic processes and computational methods. This is classically rich territory for the applied mathematician and CAM offers opportunities to work in many areas of fluids with researchers whose interests range throughout the engineering disciplines.
Mathematical biology is, quite simply, the use of mathematical tools and analysis to better understand biology. By representing biological systems in mathematical terms, both explanatory and predictive power can be gained, and nearly every branch of mathematics finds use in some area of biology. For example, differential equations can model how disease spreads through populations, cellular automata can be used to understand spotted patterns on the pelts of animals, probability theory and Markov chains can be used to understand the evolution of genetic sequences, and fractal principles can be used to understand the relationship between the mass of an organism and its metabolic rate. Numerical simulations and computational models are a key part of mathematical biology as well.
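The disease-spread example can be sketched with the classical SIR equations, integrated here with a simple Euler scheme; the rates below are illustrative, not fitted to any real epidemic:

```python
def sir_simulate(beta=0.3, gamma=0.1, S0=0.99, I0=0.01, days=160, dt=0.1):
    """Euler integration of the SIR model:
       dS/dt = -beta*S*I,  dI/dt = beta*S*I - gamma*I,  dR/dt = gamma*I,
    with S, I, R the susceptible, infected, and recovered fractions."""
    S, I, R = S0, I0, 0.0
    history = [(S, I, R)]
    for _ in range(int(days / dt)):
        dS = -beta * S * I
        dI = beta * S * I - gamma * I
        dR = gamma * I
        S, I, R = S + dS * dt, I + dI * dt, R + dR * dt
        history.append((S, I, R))
    return history

hist = sir_simulate()
peak_infected = max(I for _, I, _ in hist)   # roughly 30% with these rates
```

With these rates the basic reproduction number is beta/gamma = 3, so the infection first grows, peaks when the susceptible fraction drops to 1/3, and then dies out; the three fractions always sum to one because the right-hand sides cancel.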
Mathematical Physics is defined as the application of mathematics to problems in physics and the development of mathematical methods suitable for such applications and for the formulation of theoretical physics. Important fields of research in mathematical physics include: functional analysis/quantum physics, geometry/general relativity and combinatorics/probability theory/statistical physics.
Numerical analysis is the study of algorithms written to solve problems in continuous mathematics. These algorithms do not seek exact answers, which are typically impossible to obtain in practice. Instead, much of numerical analysis is concerned with finding approximate solutions while ensuring reasonable bounds on error.
Some of the most basic uses of numerical analysis include approximating definite integrals, solving ordinary differential equations with specified initial conditions, minimizing or maximizing a given function, and solving nonlinear systems of equations. Natural applications of numerical analysis occur within all fields of engineering and physical sciences, but recently elements of scientific computation have proven useful in the life sciences and even the arts. Specific examples include the use of optimization in portfolio management, numerical linear algebra in quantitative psychology, and stochastic differential equations and Markov chains in computer simulation of living cells.
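As a sketch of the first of these tasks, here is a composite trapezoidal rule for a definite integral, together with its characteristic second-order error behavior (the test integrand is a toy choice):

```python
import math

def trapezoid(f, a, b, n):
    """Composite trapezoidal rule with n subintervals."""
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        total += f(a + i * h)
    return total * h

# Approximate the integral of sin(x) on [0, pi]; the exact value is 2.
approx = trapezoid(math.sin, 0.0, math.pi, 1000)

# The error is O(h^2): doubling n should cut the error by about 4.
e1 = abs(trapezoid(math.sin, 0.0, math.pi, 100) - 2.0)
e2 = abs(trapezoid(math.sin, 0.0, math.pi, 200) - 2.0)
```

This is the numerical-analysis pattern in miniature: the answer is only approximate, but the error is bounded and shrinks at a known, provable rate as the computation is refined.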
The goal of optimization is to find solutions to minimization problems subject to a given set of constraints. While the traditional use of optimization is to solve problems arising in the physical sciences and engineering, optimization can also serve as an objective method of balancing parameters in many other multidisciplinary problems. The basic challenge in optimization is to compute a good solution in a reasonable amount of time, typically by exploiting any special structure that a particular problem might have. On the practical side, one must always consider how well a solution obtained from optimization solves the original problem at hand. These issues generate many questions related to other fields like computer science, operations research and pure mathematics.
There are two main branches of optimization: continuous and discrete. Both are huge areas, and both tend to be very popular among CAM students.
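A minimal sketch from the continuous branch: projected gradient descent on a toy box-constrained problem (the objective and constraint set are invented for illustration):

```python
def projected_gradient(grad, project, x0, step=0.1, iters=500):
    """Minimize a smooth function over a convex set:
    take a gradient step, then project back onto the feasible set."""
    x = x0
    for _ in range(iters):
        g = grad(x)
        x = project([xi - step * gi for xi, gi in zip(x, g)])
    return x

# Minimize (x-3)^2 + (y+1)^2 subject to the box 0 <= x, y <= 1.
grad = lambda v: [2 * (v[0] - 3), 2 * (v[1] + 1)]
project = lambda v: [min(max(vi, 0.0), 1.0) for vi in v]

sol = projected_gradient(grad, project, [0.5, 0.5])
# The unconstrained minimizer (3, -1) is infeasible, so the solution
# lands on the boundary of the box at (1, 0).
```

This illustrates the "exploit special structure" point: projection onto a box is a one-line clipping operation, which is what makes the method cheap for this constraint set.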
Probability and Stochastic Processes
Probability is the study of making estimates, predictions, and decisions about systems containing uncertainty. It may range from the highly theoretical, such as real analysis and measure theory on finite-measure spaces, to the highly applied, such as statistics for large sets of data. The field of stochastic processes lies somewhere in the middle: It's the study of sequences of related, uncertain events, and often involves modeling a complicated real-life system (for example, the weather on consecutive days) with a simpler representation (for example, a Markov chain) that is more amenable to known theoretical methods.
The mathematical tools used in probability and stochastic processes vary widely, depending on the particular area of study. They may include results and techniques from real analysis, such as the Central Limit Theorem and Fubini's Theorem; methods from statistics, such as Maximum Likelihood Estimation and hypothesis testing; or computational methods such as simulation and Markov Chain Monte Carlo. Some common models from stochastic processes include random walks, martingales, and Brownian motion.
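The weather example mentioned above can be made concrete with a two-state Markov chain; the transition probabilities are invented for illustration:

```python
import random

# Two-state weather chain: rows are the current state, entries are
# the probabilities of tomorrow's state (each row sums to 1).
P = {"sunny": {"sunny": 0.8, "rainy": 0.2},
     "rainy": {"sunny": 0.4, "rainy": 0.6}}

def simulate(chain, start, steps, rng):
    """Run the chain and count visits to each state."""
    state = start
    visits = {s: 0 for s in chain}
    for _ in range(steps):
        visits[state] += 1
        r, acc = rng.random(), 0.0
        for nxt, p in chain[state].items():   # sample the next state
            acc += p
            if r < acc:
                state = nxt
                break
    return visits

rng = random.Random(42)
visits = simulate(P, "sunny", 100_000, rng)
frac_sunny = visits["sunny"] / 100_000
# Long-run fractions approach the stationary distribution pi = pi P,
# which here works out to (2/3, 1/3).
```

Solving pi = pi P by hand gives pi_sunny * 0.2 = pi_rainy * 0.4, hence the (2/3, 1/3) split, and the simulation's long-run visit fractions converge to exactly that, independent of the starting state.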
Perhaps the most popular applications of probability and stochastic processes are:
- Genetics: using currently existing genetic diversity in a population to understand that population's history of growth and migration.
- Financial Mathematics: understanding the behavior of the stock market in terms of Brownian motion.
- Network Engineering: using principles of queuing theory to design effective computer networks.
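A sketch of the queuing-theory item above: simulating the mean waiting time in an M/M/1 queue via Lindley's recursion and comparing it with the closed-form value (the arrival and service rates are illustrative):

```python
import random

def mm1_mean_wait(lam, mu, n_customers=200_000, seed=1):
    """Mean time spent waiting in queue for an M/M/1 queue,
    via Lindley's recursion: W_{n+1} = max(0, W_n + S_n - A_{n+1})."""
    rng = random.Random(seed)
    W, total = 0.0, 0.0
    for _ in range(n_customers):
        total += W
        S = rng.expovariate(mu)    # this customer's service time
        A = rng.expovariate(lam)   # time until the next arrival
        W = max(0.0, W + S - A)
    return total / n_customers

lam, mu = 0.5, 1.0                       # arrival rate and service rate
sim = mm1_mean_wait(lam, mu)
exact = lam / (mu * (mu - lam))          # closed-form Wq = 1.0 here
```

The agreement between the simulated and closed-form values is the kind of sanity check a network designer runs before trusting either the model or the simulator on a configuration with no known formula.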
Every day, vast amounts of information are transmitted around the globe, often instantaneously, using a variety of wired and wireless technologies. The ideas behind most of these technologies, and the theory underlying them, are developed within the Communications area in Electrical Engineering. This research draws heavily on mathematics, particularly stochastic processes, probability, statistics, real analysis, and combinatorics. Some active research at Cornell in the field of communications includes: information theory, which is concerned with how much information is actually in a given data set and with the fundamental limits of data compression and transmission; sensor networks, which usually consist of data-collecting networks with minimal infrastructure (such as sensors measuring environmental temperature); wireless networks; and signal and image processing.
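A small sketch of the information-theory idea: the Shannon entropy of a sequence measures how much information it actually carries, as bits per symbol (the strings below are toy examples):

```python
from collections import Counter
from math import log2

def entropy_bits(data):
    """Empirical Shannon entropy H = -sum p_i log2(p_i), in bits per symbol:
    a lower bound on the average code length of any lossless encoding
    that treats symbols independently."""
    counts = Counter(data)
    n = len(data)
    return -sum((c / n) * log2(c / n) for c in counts.values())

h0 = entropy_bits("aaaa")   # one symbol, no uncertainty: 0 bits/symbol
h1 = entropy_bits("abab")   # two equally likely symbols: 1 bit/symbol
h2 = entropy_bits("abcd")   # four equally likely symbols: 2 bits/symbol
```

This is the sense in which a data set has an intrinsic information content: a perfectly repetitive signal compresses to almost nothing, while a uniformly random one cannot be compressed at all.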