The read() call blocks because, when called with no argument, read() reads from the stream in question until it encounters EOF.
If your use case is as simple as your example code, a cheap workaround is to defer reading from p.stdout until after you close the connection:
import subprocess

cmd = ['ssh', '-t', '-t', 'deploy@pdb0']
# text=True lets us write str rather than bytes to p.stdin (Python 3.7+;
# on older versions use universal_newlines=True)
p = subprocess.Popen(cmd, stdin=subprocess.PIPE, stdout=subprocess.PIPE,
                     text=True)
p.stdin.write('pwd\n')
p.stdin.write('ls -l\n')
p.stdin.write('exit\n')
p.stdin.close()
outstr = p.stdout.read()
You'll then have to parse outstr to separate the output of the different commands. (Looking for occurrences of the remote shell prompt is probably the most straightforward way to do that.)
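As a rough sketch of that parsing step (the prompt string here is hypothetical; you'd have to match whatever your remote shell actually prints):

```python
PROMPT = 'deploy@pdb0:~$ '  # hypothetical; adjust to your remote shell's prompt

def split_by_prompt(outstr, prompt=PROMPT):
    """Split captured output into per-command segments.

    Each occurrence of the prompt marks the point where the shell
    echoed the next command, so everything between two prompts is one
    command's echo plus its output."""
    return [seg for seg in outstr.split(prompt) if seg.strip()]
```

This is fragile if the prompt string can appear in command output, which is part of why a library that handles prompt matching for you is preferable.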
If you need to read the complete output of one command before sending another, you have several problems. First, this can block:
p.stdin.write('pwd\n')
st1 = p.stdout.read()
because the command you write to p.stdin might be buffered. You need to flush the command before looking for output:
p.stdin.write('pwd\n')
p.stdin.flush()
st1 = p.stdout.read()
The read() call will still block, though. What you want to do is call read() with a specified buffer size and read the output in chunks until you encounter the remote shell prompt again. But even then you'll still need the select module to check the status of p.stdout so you don't block on an empty pipe.
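A minimal sketch of that select-plus-chunked-read loop might look like this (the prompt marker is hypothetical and would have to match your remote shell's actual prompt):

```python
import os
import select

def read_until(fd, marker, timeout=5.0, chunk_size=4096):
    """Read from fd in chunks until marker appears in the data.

    select() tells us whether the descriptor has data before each
    os.read(), so we never block indefinitely waiting on an empty pipe."""
    buf = b''
    while marker not in buf:
        ready, _, _ = select.select([fd], [], [], timeout)
        if not ready:
            raise TimeoutError('no output before the marker appeared')
        chunk = os.read(fd, chunk_size)
        if not chunk:          # EOF: the other side closed the stream
            break
        buf += chunk
    return buf
```

With the ssh pipe above you would then do something like p.stdin.write('pwd\n'), p.stdin.flush(), and out = read_until(p.stdout.fileno(), PROMPT). Even this glosses over the edge cases (partial prompt matches straddling chunk boundaries, OS-specific pipe behavior) that pexpect handles for you.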
There's a library called pexpect that implements that logic for you. It'll be much easier to use that (or, even better, pxssh, which specializes pexpect for use over ssh connections), as getting everything right is rather hairy, and different OSes behave somewhat differently in edge cases. (Take a look at pexpect.spawn.read_nonblocking() for an example of how messy it can be.)
Even cleaner, though, would be to use paramiko, which provides a higher-level abstraction for working over ssh connections. In particular, look at the example usage of the paramiko.client.SSHClient class.