
For someone used to imperative programming, it's sometimes hard to write efficient code in functional languages without using arrays/vectors. However, it seems there's always a smart way of doing it. Sorting, for example, can be done in O(n log n) time in both imperative and declarative programming languages, and the inability to swap elements in place is not a real problem.
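
For instance, a plain list-based mergesort already matches the O(n log n) bound; here's a minimal Haskell sketch (the names are my own):

```haskell
-- Mergesort on plain lists: no arrays, no in-place swaps.
-- Splitting and merging are both linear, so the whole sort is O(n log n).
msort :: Ord a => [a] -> [a]
msort []  = []
msort [x] = [x]
msort xs  = merge (msort front) (msort back)
  where
    (front, back) = splitAt (length xs `div` 2) xs

merge :: Ord a => [a] -> [a] -> [a]
merge [] ys = ys
merge xs [] = xs
merge (x:xs) (y:ys)
  | x <= y    = x : merge xs (y:ys)
  | otherwise = y : merge (x:xs) ys
```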

Consider a functional programming language that has no arrays or any data structure that can access an arbitrary element in constant time. Take a subset of SML or Haskell without arrays, for example.

Of course, every computable problem is solvable by a program written in such a language. But I'm wondering if there is any problem that intrinsically can't be solved efficiently outside the imperative world. By "efficiently", I mean with at most the same time complexity as the best known imperative algorithm for the problem.

For example, can matrix multiplication be computed efficiently using only lists in SML or Haskell?
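
The naive O(n^3) algorithm, at least, goes through fine with lists of lists; a sketch (names are my own, and whether anything list-based can match the sub-cubic imperative algorithms is part of what I'm asking):

```haskell
import Data.List (transpose)

-- Naive O(n^3) matrix multiplication on row-major lists of lists.
-- `transpose b` exposes the columns of b, so each entry is a dot product.
matMul :: Num a => [[a]] -> [[a]] -> [[a]]
matMul a b = [ [ sum (zipWith (*) row col) | col <- transpose b ] | row <- a ]

-- e.g. matMul [[1,2],[3,4]] [[5,6],[7,8]] == [[19,22],[43,50]]
```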

  • I'm pretty confident that you can't make a prime sieve without a mutable array as efficiently as with one. Commented Jun 15, 2013 at 21:38
  • All programming languages are just an abstraction above assembly/machine language. Commented Jun 15, 2013 at 21:52
  • Are you asking for a problem that requires random access for an optimal solution? How are strings represented in your "array-less, no random access" language? Commented Jun 15, 2013 at 22:05
  • @jxh I assume Gabriel was thinking in terms of complete programs with input and output (otherwise the question would be trivial, since Gabriel already knows he can't do random access in O(1), having specifically excluded it). And if you count input and output, then it would have to be O(n) in both cases, since you need to read all n chars before you can find the middle of them. The question is whether there is any problem where the complete program to solve it will be asymptotically slower because you can't do random access in O(1) time. Commented Jun 15, 2013 at 22:39
  • @sepp2k Note that you can always keep O(1) access to the middle of a list just by taking its "derivative" (i.e. transforming it into a zipper). You can actually store any number of O(1) "pointers" to the interior of a list this way, as long as you take the "derivative" once for each pointer you want to keep track of. So O(1) access to the middle of any collection does not rule out the use of functional data structures. Commented Jun 16, 2013 at 0:33
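
A minimal sketch of the list zipper ("derivative") that the last comment describes; the names here are illustrative, not from any particular library. Building the zipper at position k costs O(k) once, after which reading, replacing, or shifting the focus is O(1):

```haskell
-- A list zipper: elements before the focus (stored reversed), the focus,
-- and the elements after it. Operations at the focus are O(1).
data Zipper a = Zipper [a] a [a] deriving Show

fromList :: [a] -> Maybe (Zipper a)
fromList []     = Nothing
fromList (x:xs) = Just (Zipper [] x xs)

right :: Zipper a -> Maybe (Zipper a)
right (Zipper ls x (r:rs)) = Just (Zipper (x:ls) r rs)
right _                    = Nothing

focus :: Zipper a -> a
focus (Zipper _ x _) = x

replace :: a -> Zipper a -> Zipper a
replace y (Zipper ls _ rs) = Zipper ls y rs
```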
