Barry Cooper asks whether information can be increased through computation, pointing out that Turing computation does not create anything not already in the initial data (Nature 482, 465; 2012). If we are limited to Turing machines, then I believe the answer to his question is no. But if we enhance them, information can accumulate.

Turing machines are designed to model functions, not ongoing computations involving additional input over time. But if we enhance Turing machines by giving them a persistent memory and allowing them to alter their input by interacting with their environment, then information can increase.
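The distinction can be illustrated with a toy sketch (hypothetical, not drawn from Cooper's piece): a classical Turing machine maps a fixed input to an output, whereas an interactive machine with persistent memory consumes a stream of inputs over time, so later outputs can depend on everything the environment has supplied so far.

```python
def interactive_machine(input_stream):
    """Toy interactive machine: persistent memory across interactions."""
    memory = []                      # persists between inputs, unlike a one-shot function
    for token in input_stream:
        memory.append(token)         # information gained from the environment
        yield len(set(memory))       # output so far: distinct symbols observed

# Each new input from the environment can increase what the machine "knows".
outputs = list(interactive_machine(["a", "b", "a", "c"]))
# outputs grows as novel symbols arrive: [1, 2, 2, 3]
```

The point of the sketch is only that output at each step is a function of the whole interaction history, not of a single pre-supplied input.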

Interactive machines engage in input and output during computation, which is closer to how computers are used in practice than the Turing-machine framework allows: examples include operating systems, interactive agents in artificial intelligence, and solutions to some problems in control theory.

This observation is largely unappreciated in theoretical computer science, but a handful of researchers are exploring this part of 'super-Turing' space.