If [latex]c-\delta[/latex] were not an upper bound of [latex]S[/latex], there would be an element [latex]x\in S[/latex] such that [latex]c-\delta<x[/latex]. Now [latex]x\in S[/latex] implies [latex]x\leqslant c[/latex] (because [latex]c[/latex] is an upper bound of [latex]S[/latex]) and [latex]f(x)\leqslant u[/latex] (by definition of the set [latex]S[/latex]). But that part of the proof has shown that [latex]f(x)>u[/latex] for all [latex]x\in(c-\delta,\,c+\delta)[/latex], in particular for all [latex]x\in(c-\delta,\,c][/latex]. Since [latex]c-\delta<x\leqslant c[/latex], this forces [latex]f(x)>u[/latex], a contradiction. Hence [latex]c-\delta[/latex] must be an upper bound of [latex]S[/latex].
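Written out as a single chain (with [latex]S[/latex], [latex]c[/latex], [latex]u[/latex] and [latex]\delta[/latex] as above), the contradiction is:

[latex]x\in S,\ c-\delta<x \;\Longrightarrow\; x\in(c-\delta,\,c] \;\Longrightarrow\; f(x)>u \quad\text{while also}\quad f(x)\leqslant u.[/latex]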
Yeah, some details have been skipped here. The assumption in this case is that [latex]f(c)<u<f(b)[/latex]. Hence [latex]f(c)\neq f(b)[/latex], so [latex]c\neq b[/latex]; together with [latex]c\leqslant b[/latex] this gives [latex]c<b[/latex], and [latex]\delta[/latex] can be taken small enough that [latex]c+\delta<b[/latex].
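One explicit way to make that choice (writing [latex]\delta_0[/latex] for the radius supplied by continuity at [latex]c[/latex]; this name is not used in the original argument) is

[latex]\delta=\min\!\left(\delta_0,\ \tfrac{b-c}{2}\right),[/latex]

which still gives [latex]f(x)<u[/latex] on [latex](c-\delta,\,c+\delta)[/latex] and at the same time guarantees [latex]c+\delta\leqslant c+\tfrac{b-c}{2}<b[/latex].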
For all [latex]x\in S[/latex], [latex]a\leqslant x\leqslant b[/latex]. So [latex]b[/latex] is an upper bound of [latex]S[/latex]; since [latex]c[/latex] is the least upper bound, [latex]c\leqslant b[/latex]. Hence [latex]a\leqslant x\leqslant c\leqslant b[/latex] for any [latex]x\in S[/latex] (and [latex]S[/latex] is non-empty, since [latex]a\in S[/latex]), i.e. [latex]a\leqslant c\leqslant b[/latex]. The strictness of the inequality signs follows from the fact that [latex]u=f(c)[/latex] is strictly between [latex]f(a)[/latex] and [latex]f(b)[/latex]. (NB: It is enough to have [latex]a\leqslant c\leqslant b[/latex] for the proof to proceed. Once the proof is complete, it will follow that [latex]a<c<b[/latex].)
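Making the strictness explicit once the proof is complete: since [latex]f(a)<u<f(b)[/latex] and [latex]f(c)=u[/latex],

[latex]f(c)\neq f(a)\ \text{and}\ f(c)\neq f(b) \;\Longrightarrow\; c\neq a\ \text{and}\ c\neq b \;\Longrightarrow\; a<c<b,[/latex]

using [latex]a\leqslant c\leqslant b[/latex] in the last step.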