In linear regression the quantity minimized is the sum of the squares of the lengths of the “black lines” (the vertical residuals); this is L² regression, and with a linear model it has a closed-form solution whose “best fit” estimates the conditional mean. L² penalizes larger errors disproportionately more.
What you have described, minimizing the sum of the lengths of the “black lines” themselves, is L¹ regression; even with a linear model this must be solved numerically, and its “best fit” estimates the conditional median. L¹ penalizes errors linearly, so large errors get no extra weight.
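A quick way to see the mean-vs-median distinction is the intercept-only case: fitting a single constant to data with an outlier. This is a minimal sketch (plain numpy, with the L¹ fit found by a simple grid search over candidate values, standing in for a proper numerical solver):

```python
import numpy as np

data = np.array([1.0, 2.0, 3.0, 4.0, 100.0])  # one large outlier

# L2: minimizing the sum of squared residuals gives the sample mean
l2_fit = data.mean()

# L1: minimizing the sum of absolute residuals has no closed form;
# here a crude grid search stands in for a real numerical solver
candidates = np.linspace(data.min(), data.max(), 19801)
l1_loss = np.abs(data[:, None] - candidates[None, :]).sum(axis=0)
l1_fit = candidates[l1_loss.argmin()]

print(l2_fit)  # 22.0 — the mean, dragged toward the outlier
print(l1_fit)  # ~3.0 — the median, barely moved by the outlier
```

The squared penalty lets the single outlier pull the L² fit far from the bulk of the data, while the linear penalty leaves the L¹ fit at the median.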
Of course, “best fit” depends on your choice of error metric.