Programs That Avoid the Halting Problem

algorithms, computer science

I was just reading another explanation of the halting problem, and it got me thinking: all the problems I've seen given as examples involve infinite sequences. But I never use infinite sequences in my programs – they take too long. All the real-world applications have lower and upper bounds. Even reals aren't truly reals – they are approximations stored as 32/64 bits, etc.

So the question is: is there a subset of programs for which it can be determined whether they halt? Is that subset good enough for most programs? Can I build a set of language constructs from which I can determine the 'haltability' of a program? I'm sure this has been studied somewhere before, so any pointers would be appreciated. The language wouldn't be Turing complete, but is there such a thing as nearly Turing complete, which is good enough?

Naturally enough, such a set of constructs would have to exclude recursion and unbounded while loops, but I can write a program without those easily enough.

Reading from standard input, for example, would have to be bounded, but that's easy enough – I'll limit my input to 10,000,000 characters or so, depending on the problem domain.

Thanks in advance.

[Update]

After reading the comments and answers, perhaps I should restate my question.

For a given program in which all inputs are bounded, can you determine whether the program halts? If so, what are the constraints on the language and what are the limits on the input set? The maximal set of these constructs would determine a language for which halting can be deduced. Is there some study that's been done on this?

[Update 2]

Here's the answer: it's yes, and it has been known since 1967.
From http://www.isp.uni-luebeck.de/kps07/files/papers/kirner.pdf:

That the halting problem can be at least theoretically solved for finite-state
systems has been already argued by Minsky in 1967 [4]:
“…any finite-state machine, if left completely to itself, will fall eventually into a
perfectly periodic repetitive pattern. The duration of this repeating pattern cannot
exceed the number of internal states of the machine…”

(and so if you stick to finite Turing machines, you can build an oracle)
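
To make Minsky's observation concrete, here's a minimal Python sketch (my own illustration, not from the paper): repeatedly applying a deterministic transition function over a finite set of states must eventually revisit a state, after which the behaviour is perfectly periodic, and the period can't exceed the number of states. The `step` function and the mod-12 example are arbitrary placeholders.

```python
def find_cycle(step, start):
    """Iterate `step` from `start` until a state repeats.

    Returns (steps_before_cycle, cycle_length). Both are bounded by the
    number of distinct states, so for a finite state space this always
    terminates.
    """
    seen = {}              # state -> index at which it was first visited
    state, i = start, 0
    while state not in seen:
        seen[state] = i
        state = step(state)
        i += 1
    return seen[state], i - seen[state]

# Example: a toy "machine" whose state is an integer mod 12.
prefix, period = find_cycle(lambda s: (3 * s + 5) % 12, start=0)
print(prefix, period)      # the period can never exceed the 12 possible states
```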

Best Answer

The issue isn't with the input (obviously, with unbounded input you can have unbounded running time just to read it); it is with the number of internal states.

When the number of internal states is bounded, you can theoretically solve the halting problem in every case (just emulate the program until it either halts or repeats a state); when it isn't, there are cases where the problem isn't solvable. But even when the number of internal states is bounded in practice, it is so huge that methods relying on that boundedness are useless for proving the termination of any but the most trivial programs.
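
As a hedged illustration of the "emulate until it halts or repeats a state" argument (my sketch, not the answerer's code), here is what such a decider looks like in Python. `initial_config`, `step`, and `is_halted` are hypothetical hooks into whatever interpreter represents the bounded machine; the only requirements are that a configuration captures the entire state and that `step` is deterministic.

```python
def halts(initial_config, step, is_halted):
    """Decide halting for a deterministic machine with finitely many configurations.

    Returns True if the machine halts, False if it runs forever.
    """
    seen = set()
    config = initial_config
    while not is_halted(config):
        if config in seen:
            return False       # a configuration repeated, so it will cycle forever
        seen.add(config)
        config = step(config)  # deterministic: same config always gives same successor
    return True
```

In practice this is exactly the method the answer calls useless for real programs: the set of configurations, while finite, is astronomically large.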

There are more practical ways to check the termination of programs. For instance, express them in a programming language that has neither recursion nor goto, and whose looping constructs all take a bound on the number of iterations, specified on entry to the loop. (Note that the bound doesn't have to match the actual number of iterations: a standard way to prove the termination of a loop is to exhibit a function that you prove strictly decreases from one iteration to the next and that the entry condition ensures is positive; you can use its first evaluation as your bound.)
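
Here is a small sketch of that bounded-loop discipline (my own Python example, not from the answer): Euclid's algorithm written with an explicit bound evaluated on entry to the loop. The variant `b` strictly decreases on every iteration and stays non-negative, so its initial value is a safe ceiling on the iteration count even though the loop usually finishes much earlier.

```python
def gcd_bounded(a, b):
    """Euclid's algorithm in the 'bounded loop' style described above."""
    bound = b                  # bound evaluated once, on entry to the loop
    for _ in range(bound):     # termination is guaranteed by the loop construct itself
        if b == 0:
            break              # finished early; the bound is only a ceiling
        a, b = b, a % b        # variant: b strictly decreases each iteration
    return a

print(gcd_bounded(252, 105))   # 21
```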
