We approach the knapsack problem from a statistical learning perspective. We consider a stochastic setting in which the description of the problem instances is uncertain; as a consequence, uncertainty about the optimal solution arises. We characterize different classes of knapsack problem instances by their sensitivity to noise variations, calculating their informativeness as measured by the approximation set coding (ASC) principle. We also demonstrate experimentally that, depending on the instance class, the ability to reliably localize good knapsack solution sets may or may not be required for good generalization performance. Furthermore, we present a parametrization of knapsack solutions based on the concept of a knapsack core, and we show that this parametrization allows us to regularize the model complexity of the knapsack learning problem. Algorithms based on the core concept may exploit this parametrization to achieve better generalization performance at reduced running times. Finally, we consider a randomized approximation scheme for the counting knapsack problem proposed by Dyer, and we employ the ASC principle to determine the maximally informative approximation ratio.
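To make the core concept referenced above concrete, the sketch below computes a core in the classical Balas-Zemel sense: items are sorted by efficiency (profit-to-weight ratio), the break item is the first item the greedy filling cannot pack, and the core is a small window of undecided items around it. This is a minimal illustration under that standard definition, not the paper's construction; the names `profits`, `weights`, `capacity`, and the half-width `delta` are assumptions introduced here for exposition.

```python
# Sketch of a knapsack "core" (Balas-Zemel style): sort items by efficiency,
# locate the break item, and keep only a window of undecided items around it.
# All names (profits, weights, capacity, delta) are illustrative assumptions,
# not the paper's notation.

def knapsack_core(profits, weights, capacity, delta=5):
    # Order items by decreasing efficiency (profit per unit weight).
    order = sorted(range(len(profits)),
                   key=lambda i: profits[i] / weights[i],
                   reverse=True)
    # Greedily fill the knapsack to find the break item.
    remaining = capacity
    break_pos = len(order)
    for pos, i in enumerate(order):
        if weights[i] > remaining:
            break_pos = pos
            break
        remaining -= weights[i]
    # Only items in the core window are treated as free decision variables;
    # fixing the rest shrinks the search space, which is the sense in which
    # the core parametrization regularizes model complexity.
    lo = max(0, break_pos - delta)
    hi = min(len(order), break_pos + delta + 1)
    fixed_in = order[:lo]    # highly efficient items, fixed to "packed"
    core = order[lo:hi]      # undecided items near the break item
    fixed_out = order[hi:]   # inefficient items, fixed to "excluded"
    return fixed_in, core, fixed_out


# Example usage on a tiny instance.
if __name__ == "__main__":
    profits = [10, 7, 6, 5, 4, 3]
    weights = [4, 3, 3, 3, 2, 2]
    fixed_in, core, fixed_out = knapsack_core(profits, weights,
                                              capacity=9, delta=1)
    print(fixed_in, core, fixed_out)
```

Shrinking `delta` fixes more variables and yields a coarser, more strongly regularized solution family; growing it recovers the full problem, which is the trade-off the core parametrization exposes.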