I am having trouble understanding how to compute $\operatorname E[\bar{X}\mid X_{(n)}]$ in the following setting. In the lecture we learned that $(X_{(1)}, X_{(n)})$ is sufficient for $(a,b)$ if the $X_i$ are uniform on $(a,b)$ (where $a<b$). Naturally, I would like to use the Lehmann–Scheffé theorem, which says: if $V$ is a complete, sufficient statistic for $\theta$ and $\operatorname E[g(V)]=h(\theta)$ holds, then $g(V)$ is the UMVUE of $h(\theta)$. That seems sensible. But looking at the proof, $(X_{(1)}, X_{(n)})$ is already a sufficient statistic for $(\theta_1, \theta_2)$, too, right? Let's say $g(X_{(1)}, X_{(n)})$ is a function of your statistic.

(Two side remarks from the comments: the notation for a point estimator commonly carries a hat, e.g. $\hat\theta$; and it is worth reading a little about ancillary statistics and Basu's theorem, which greatly simplify the math here.)

In the one-parameter case where the $X_i$ are uniform on $(0,\theta)$, $X_{(n)}$ alone is complete and sufficient, and $2\bar{X}$ is unbiased for $\theta$, so the quantity to compute is
$$\operatorname E[2\bar{X}\mid X_{(n)}].$$
Since the $X_i$ are exchangeable, $\operatorname E[\bar{X}\mid X_{(n)}]=\operatorname E[X_1\mid X_{(n)}]$. Condition on whether $X_1$ is the sample maximum: each observation is the maximum with probability $1/n$, and given that $X_1$ is not the maximum it is uniform on $(0,t)$. Hence
$$\operatorname E\left[X_1\mid X_{(n)}=t\right]=\operatorname E\left[X_1\mid X_1=t\right]\cdot\frac1n+\operatorname E\left[X_1\mid X_1<t\right]\cdot\frac{n-1}{n}=\frac{t}{n}+\frac{n-1}{n}\cdot\frac{t}{2}=\frac{n+1}{2n}\,t,$$
so $\operatorname E[2\bar{X}\mid X_{(n)}]=\frac{n+1}{n}X_{(n)}$, which by the Lehmann–Scheffé theorem is the UMVUE of $\theta$.
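Not part of the original thread, but a quick way to convince yourself of the conditional-expectation result is a small Monte Carlo sketch in Python/NumPy. It assumes the one-parameter $U(0,\theta)$ setup above; the values `n = 5`, `theta = 1.0`, the conditioning point `t = 0.8`, and the window width are illustrative choices, not from the source. It conditions on $X_{(n)}$ falling in a narrow window around $t$ and compares the empirical mean of $2\bar X$ with the predicted $\frac{n+1}{n}t$:

```python
import numpy as np

rng = np.random.default_rng(0)
n, theta = 5, 1.0          # hypothetical sample size and true endpoint
reps = 2_000_000           # number of simulated samples

# Each row is one sample of size n from U(0, theta).
x = rng.uniform(0.0, theta, size=(reps, n))
xbar = x.mean(axis=1)      # sample means
xmax = x.max(axis=1)       # sample maxima X_(n)

# Approximate conditioning: keep samples whose maximum is close to t.
t, window = 0.8, 0.005
mask = np.abs(xmax - t) < window

print("empirical E[2*Xbar | X_(n) ~ t]:", 2 * xbar[mask].mean())
print("predicted (n+1)/n * t          :", (n + 1) / n * t)
```

Shrinking the window reduces the bias from conditioning on a neighborhood rather than a point, but leaves fewer samples in the bin and so increases the Monte Carlo noise; with the settings above the two printed numbers should agree to roughly two decimal places (the prediction is $\frac{6}{5}\cdot 0.8 = 0.96$).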