Looks like there are multiple ways to read this question, so let me present the usual way (at least for me) of solving elementary conditional-probability examples like this one:
Let [math]S[/math] denote the event that father scores, and let [math]R[/math] denote the event that the son reports scoring. The given probabilities are [math]P(S)=0.6[/math] and [math]P(R | S)=0.8[/math], and from the formulation of the problem it follows that [math]P( \bar R | \bar S)=0.8[/math] and [math]P(R | \bar S)=0.2[/math] (where [math]\bar A[/math] denotes the complement of [math]A[/math]).
Now we can use Bayes' formula together with the law of total probability to compute what we want:
[math]P(S | R)=\frac{P(S \cap R)}{P(R)}=\frac{P(R|S)P(S)}{P(R|S)P(S)+P(R|\bar S)P(\bar S)}=\frac{0.8\cdot0.6}{0.8\cdot0.6+0.2\cdot 0.4}\approx 0.857[/math].
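If you want to double-check the arithmetic, here is a minimal Python sketch of the same computation (the variable names are my own, chosen to mirror the notation above):

[code]
# Given probabilities from the problem statement
p_s = 0.6            # P(S): father scores
p_r_given_s = 0.8    # P(R | S): son reports scoring when father scored
p_r_given_not_s = 0.2  # P(R | not S): son reports scoring when father did not

# Law of total probability: P(R)
p_r = p_r_given_s * p_s + p_r_given_not_s * (1 - p_s)  # = 0.56

# Bayes' formula: P(S | R)
p_s_given_r = p_r_given_s * p_s / p_r
print(p_s_given_r)  # 0.8571428...
[/code]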
Notice that the probability rises from 0.6 to about 0.857 because we have added information: this is the main idea of Bayesian thinking. A priori (before hearing from the son) we are quite uncertain whether the father scored; a posteriori, after the report, we are considerably more certain.
Cheers,
Tuomas