The problem at hand is to solve for $x$ the equation $g^x\bmod p=v$, with $x$ in a small interval $[a,a+n]$, for some given $a$ (perhaps $a=0$), small $n$, $g$, $p$, and $v$; with $p$ a 3072-bit prime, here such that $q=(p-1)/2$ is prime, $g^q\bmod p=1$; specifically $g=2$ and $p=2^{3072}-2^{3008}-1+2^{64}(\lfloor2^{2942}\pi\rfloor+1690314)$.
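For concreteness, $p$ and $q$ can be reconstructed from that formula and sanity-checked in a few lines of Python; this sketch assumes the mpmath library for the high-precision $\pi$, and the order check is a single modular exponentiation:

```python
from mpmath import mp

# Enough binary precision to pin down floor(2^2942 * pi) exactly, with margin.
mp.prec = 3200
p = 2**3072 - 2**3008 - 1 + 2**64 * (int(mp.floor(mp.pi * 2**2942)) + 1690314)
q = (p - 1) // 2

assert p.bit_length() == 3072
assert pow(2, q, p) == 1   # g = 2 has order q (the order divides the prime q, and is not 1)
```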
Algorithms that do not make use of $x$ being in a small interval are doomed, because the current academic record for the Discrete Logarithm Problem in a prime field is a 795-bit $p$, and we are far beyond that. This rules out the Number Field Sieve, Pollard's rho and other completely generic methods; and also Pohlig-Hellman, because $p-1$ has a large prime factor, the 3071-bit $q$.
For small $n$ we can compute $v_0=g^a\bmod p$, then compute $v_{i+1}=g\,v_i\bmod p$ for increasing $i$ until $v_i=v$. Now $x=a+i$. Expected cost is dominated by about $n/2$ modular multiplications. This is fast for $n$ up to millions ($2^{20+}$) on a standard CPU.
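A minimal sketch of that scan in Python (function name hypothetical):

```python
def dlog_scan(g, v, p, a, n):
    """Find x in [a, a+n] with pow(g, x, p) == v by sequential search."""
    vi = pow(g, a, p)
    for i in range(n + 1):
        if vi == v:
            return a + i
        vi = vi * g % p    # for g = 2 this is a shift and a conditional subtraction
    return None            # no solution in the interval
```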
The above takes advantage of $g=2$ only in that doubling modulo $p$ is faster than multiplication by a general $g$ modulo $p$. But there is a better way to exploit it: compute $w=2^{3071}\bmod p$ and $w_0=v^{-1}\,2^a\bmod p$, then compute $w_{i+1}=w\,w_i\bmod p$ for increasing $i$ until $w_i$ is a power of two. Since $w_i=2^{\,a+3071\,i-x}\bmod p$, each step advances that exponent by 3071, and the window of integer powers of two below $p$ ($2^0$ to $2^{3071}$) has width 3072, so the walk cannot step over it. Now $x=a+3071\,i-\log_2(w_i)$. Expected cost is dominated by about $n/(2\log_2 p)$ modular multiplications. This is fast for $n$ up to billions ($2^{30+}$) on a standard CPU.
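A sketch of that walk under the same assumptions (Python 3.8+ for `pow(v, -1, p)`); the power-of-two test and the base-2 logarithm of a power of two are cheap bit operations:

```python
def dlog_pow2_scan(v, p, a, n):
    """Find x in [a, a+n] with pow(2, x, p) == v, covering 3071 exponents per step."""
    w = pow(2, 3071, p)
    wi = pow(v, -1, p) * pow(2, a, p) % p    # w_0 = v^-1 * 2^a mod p
    for i in range(n // 3071 + 2):
        if wi & (wi - 1) == 0:               # wi is 2^k as an integer, 0 <= k <= 3071
            return a + 3071 * i - (wi.bit_length() - 1)
        wi = w * wi % p
    return None
```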
For larger $n$ or faster operation, Baby-Step/Giant-Step and Pollard's kangaroo are better algorithms, because they take advantage of the small interval. Both require on the order of $2\sqrt n$ multiplications modulo $p$.
Baby-Step/Giant-Step is guaranteed to find the solution in a bounded number of steps, and for $g=2$ half of its modular multiplications can be doublings. But it requires a lot of memory and memory accesses: on the order of $\sqrt n\,\log_2 p$ bits of memory (plus overhead for fast search) in its standard form. We can reduce that to $\sqrt n\,\log_2 n$ bits by storing hashes or low-order bits of the values to be searched. We can distribute the memory across multiple independent CPUs, or the work, but not both. It's practicable for $n$ up to $2^{50+}$.
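A minimal dictionary-based sketch (it keeps full values as keys; the memory reduction by storing hashes or low-order bits, and the doubling optimization for $g=2$, are left out for clarity):

```python
from math import isqrt

def dlog_bsgs(g, v, p, a, n):
    """Find x = a + m*i + j in [a, a+n] with pow(g, x, p) == v."""
    m = isqrt(n) + 1
    table = {}                        # baby steps: g^j -> j for j in [0, m)
    gj = 1
    for j in range(m):
        table[gj] = j
        gj = gj * g % p
    step = pow(g, -m, p)              # giant step: multiply by g^-m (Python 3.8+)
    cur = v * pow(g, -a, p) % p       # search for g^(m*i + j) = v * g^-a
    for i in range(m + 1):
        j = table.get(cur)
        if j is not None:
            return a + m * i + j
        cur = cur * step % p
    return None
```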
Pollard's kangaroo is randomized, but it uses little memory and is easy to parallelize on multiple CPUs. This makes it practicable for $n$ up to $2^{80+}$.
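A serial sketch with the usual heuristic parameters (mean jump near $\sqrt n/2$, about $0.7\sqrt n$ tame jumps, giving success probability around $3/4$ per attempt); a parallel version would instead collect distinguished points from many walks:

```python
from math import isqrt

def dlog_kangaroo(g, v, p, a, n, seed=0):
    """Find x in [a, a+n] with pow(g, x, p) == v; retry with a new seed on failure."""
    b, s = a + n, isqrt(n)
    k = 1                                   # pick k so the mean jump (2^k-1)/k is near sqrt(n)/2
    while ((1 << k) - 1) // k < s // 2 + 1:
        k += 1
    dist = [1 << j for j in range(k)]       # jump distances 2^j
    mult = [pow(g, d, p) for d in dist]     # corresponding factors g^(2^j) mod p
    idx = lambda y: (y ^ seed) % k          # deterministic pseudo-random jump selector

    # Tame kangaroo: starts at the known exponent b, sets a trap at its endpoint.
    pos, d_tame = pow(g, b, p), 0
    for _ in range(7 * s // 10 + 1):        # about 0.7*sqrt(n) jumps
        j = idx(pos)
        d_tame += dist[j]
        pos = pos * mult[j] % p
    trap = pos                              # the trap sits at exponent b + d_tame

    # Wild kangaroo: starts at the unknown exponent x, follows the same jump rule,
    # so once it lands on any point of the tame path it ends up in the trap.
    pos, d_wild = v, 0
    while d_wild <= n + d_tame:
        if pos == trap:                     # walks coalesced: x + d_wild = b + d_tame
            return b + d_tame - d_wild
        j = idx(pos)
        d_wild += dist[j]
        pos = pos * mult[j] % p
    return None                             # jumped past the trap without coalescing
```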