\cite{cheng_underdamped_2018} studied ULMC as a variant of the HMC algorithm, which converges to $\varepsilon$ error in the 2-Wasserstein distance after $\mathcal{O}\left(\frac{\sqrt{d}\, \kappa^2}{\varepsilon}\right)$ iterations, under the assumption that the target distribution has the form $p^* \propto \exp(-f(x))$, where $f$ is $L$-smooth and $m$-strongly convex, with $\kappa = L/m$ denoting the condition number.
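For context, the continuous-time dynamics that ULMC discretizes is the underdamped Langevin diffusion, which in the parameterization of \cite{cheng_underdamped_2018} (friction $\gamma = 2$ and inverse mass $u = 1/L$) reads
$$ dv_t = -\gamma v_t \, dt - u \nabla f(x_t) \, dt + \sqrt{2 \gamma u} \, dB_t, \qquad dx_t = v_t \, dt, $$
whose stationary distribution has $x$-marginal proportional to $\exp(-f(x))$.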
Theorem 1 from \cite{cheng_underdamped_2018}: Let $p^{(n)}$ be the distribution of the iterate of Algorithm 2.1 after $n$ steps, starting from the initial distribution $p^{(0)}(x, v) = 1_{x = x^{(0)}} \cdot 1_{v = 0}$. Let the initial distance to the optimum satisfy $\| x^{(0)} - x^* \|_2^2 \leq \mathcal{D}^2$. If we set the step size to be
$$ \delta = \min \left\{ \frac{\varepsilon}{104 \kappa} \sqrt{\frac{1}{d / m + \mathcal{D}^2}},\ 1 \right\} $$
and run Underdamped Langevin for $n$ iterations with
$$ n \geq \max \left\{ \frac{208 \kappa^2}{\varepsilon} \cdot \sqrt{\frac{d}{m} + \mathcal{D}^2},\ 2 \kappa \right\} \cdot \log \left( \frac{24 \left( \frac{d}{m} + \mathcal{D}^2 \right)}{\varepsilon} \right) $$
then we have the guarantee that
$$ W_2\left(p^{(n)}, p^*\right) \leq \varepsilon $$
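As a concrete illustration, the sketch below instantiates the step size $\delta$ and iteration count $n$ from the theorem and runs a plain Euler--Maruyama discretization of the diffusion above. Note that Algorithm 2.1 of \cite{cheng_underdamped_2018} instead draws each step from the exact Gaussian law of the SDE with the gradient frozen at the current iterate, so this is a schematic stand-in rather than the paper's integrator; the names \texttt{ulmc\_sketch} and \texttt{grad\_f} are hypothetical.
\begin{verbatim}
import numpy as np

def ulmc_sketch(grad_f, x0, L, m, D, eps, rng, gamma=2.0):
    # Hypothetical helper illustrating Theorem 1's parameter choices.
    d = x0.shape[0]
    kappa = L / m
    u = 1.0 / L  # inverse-mass parameter used in the paper's analysis
    # Step size and iteration count taken from the theorem.
    delta = min(eps / (104 * kappa) * np.sqrt(1.0 / (d / m + D**2)), 1.0)
    n = int(np.ceil(max(208 * kappa**2 / eps * np.sqrt(d / m + D**2),
                        2 * kappa)
                    * np.log(24 * (d / m + D**2) / eps)))
    x, v = x0.copy(), np.zeros(d)  # matches p^(0): point mass at (x^(0), 0)
    for _ in range(n):
        # Euler-Maruyama step for
        # dv = -gamma v dt - u grad f(x) dt + sqrt(2 gamma u) dB
        v = (v - delta * (gamma * v + u * grad_f(x))
             + np.sqrt(2.0 * gamma * u * delta) * rng.standard_normal(d))
        x = x + delta * v
    return x, v
\end{verbatim}
For example, for a Gaussian target $f(x) = \tfrac{1}{2} x^\top \Sigma^{-1} x$ one would pass \texttt{grad\_f = lambda x: Sigma\_inv @ x}, with $L$ and $m$ the largest and smallest eigenvalues of $\Sigma^{-1}$ and \texttt{rng = np.random.default\_rng()}.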