In recent years, researchers and engineers have developed numerous abstractions and programming models that make parallel programming easier, safer, and more efficient. Despite these advances, however, the corresponding compilers remain largely ignorant of the parallelism exhibited during program execution. In particular, current compilers have no knowledge of which tasks are scheduled in which order, even though many higher-level parallel programming models encode a wealth of task-ordering information that the compiler could exploit. This thesis presents a schedule analysis that extracts task-ordering information from real-world programs. By analyzing the potential orderings between tasks, the compiler can decide whether accesses to shared memory by different tasks may interfere, and it can use this information to guide its optimizations. The schedule analysis can be combined with standard program analyses such as points-to and escape analysis to improve the precision of existing optimizations and to enable new ones.
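As a minimal illustration of the kind of ordering information such an analysis could exploit, consider the following sketch. It uses C++ with std::async purely for exposition; the programming model, the task structure, and the names (producer, consumer, data) are assumptions of this example and are not taken from the thesis itself. Waiting on the producer's future orders the consumer task strictly after the producer task, so a schedule-aware compiler could prove that their accesses to the shared buffer never interfere:

\begin{verbatim}
// Hypothetical example: names and the std::async API are illustrative only.
#include <future>
#include <numeric>
#include <vector>

int main() {
    std::vector<int> data(1024);

    // Task A (producer): writes the shared buffer.
    auto producer = std::async(std::launch::async, [&data] {
        std::iota(data.begin(), data.end(), 0);
    });

    // Waiting on A's future orders everything below after task A.
    producer.wait();

    // Task B (consumer): reads the same buffer. Because the schedule
    // guarantees that B runs strictly after A, their accesses to `data`
    // cannot interfere. A compiler that knows this ordering could treat
    // `data` as effectively private to each task, e.g. keep values in
    // registers, vectorize the loops, or elide synchronization.
    auto consumer = std::async(std::launch::async, [&data] {
        long long sum = 0;
        for (int x : data) sum += x;
        return sum;
    });

    return consumer.get() > 0 ? 0 : 1;
}
\end{verbatim}

Without knowledge of the schedule, a conservative compiler must assume that the two tasks may run concurrently and therefore treat every access to the buffer as potentially conflicting; the task-ordering information is precisely what removes this assumption.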