The paper aims at tight upper bounds on the size of pattern classification circuits that can be used for a priori parameter settings in a machine learning context. The upper bounds relate the circuit size $S(C)$ to $n_L := \lceil \log_2 m_L \rceil$, where $m_L$ is the number of training samples. In particular, we show that there exist unbounded fan-in threshold circuits with fewer than (a) $S_{cc}^{R} := 2 \cdot \sqrt{2^{n_L}} + 3$ gates for unbounded depth, and (b) $S_{cc}^{L} := 34.8 \cdot \sqrt{2^{n_L}} + 14 \cdot n_L - 11 \cdot \log_2 n_L + 2$ gates for small bounded depth, where in both cases all $m_L$ samples are classified correctly. We note that the upper bounds do not depend on the length $n$ of the input (sample) vectors. Since $n_L \ll n$ in real-world problem settings, the upper bounds yield values that are suitable for practical applications. We provide experimental evidence that the circuit size estimates work well on a number of pattern classification tasks. As a result, we formulate the conjecture that $\lceil 1.25 \cdot S_{cc}^{R} \rceil$ or $\lceil 0.07 \cdot S_{cc}^{L} \rceil$ gates are sufficient to achieve a high generalization rate of bounded-depth classification circuits.
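The bounds above are simple closed-form expressions in $m_L$ alone, so they can be evaluated directly when choosing a circuit size a priori. The following sketch computes them under our reading of the extracted formulas (in particular, that the square-root terms are $\sqrt{2^{n_L}}$); the function names are illustrative, not from the paper.

```python
import math

def n_L(m_L):
    # n_L := ceil(log2(m_L)), where m_L is the number of training samples.
    return math.ceil(math.log2(m_L))

def size_unbounded_depth(m_L):
    # S_cc^R := 2 * sqrt(2^{n_L}) + 3   (unbounded depth)
    n = n_L(m_L)
    return 2 * math.sqrt(2 ** n) + 3

def size_bounded_depth(m_L):
    # S_cc^L := 34.8 * sqrt(2^{n_L}) + 14 * n_L - 11 * log2(n_L) + 2
    #           (small bounded depth)
    n = n_L(m_L)
    return 34.8 * math.sqrt(2 ** n) + 14 * n - 11 * math.log2(n) + 2

# Conjectured sizes sufficient for high generalization:
def conjectured_sizes(m_L):
    return (math.ceil(1.25 * size_unbounded_depth(m_L)),
            math.ceil(0.07 * size_bounded_depth(m_L)))
```

For example, a training set of $m_L = 1000$ samples gives $n_L = 10$, hence $S_{cc}^{R} = 2 \cdot 32 + 3 = 67$ gates; note that the input dimension $n$ never enters the computation.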