Poor teaching

The book I'm working through to try to learn C++ has presented about 11 example programs so far, every one of which I've totally ignored in favour of doing something with the just-learned principles myself. In my own little flights of fancy several mistakes have cropped up, and having to go and check them, rectify them and improve the code has taught me quite a lot about the way the language works.

For once, though, I thought I'd follow an example program just as I was supposed to, to see if it was worth listening to the book rather than disappearing on my little flights of fancy. The program is designed to work out interest repayments on a loan. It runs:

#include <iostream>
#include <cmath>
using namespace std;

int main()
{

    double start;               // Starting value (the amount borrowed)
    double intrate;             // Interest rate
    double payperyear;          // Number of payments per year
    double numyears;            // Number of years to pay over
    double payment;             // Resulting payment per period
    double numer, denom;        // Temporary working variables
    double b, e;                // Base and exponent for pow()

    cout << "Enter starting value: ";
    cin >> start;

    cout << "Enter interest rate: ";
    cin >> intrate;

    cout << "And the number of payments per year is: ";
    cin >> payperyear;

    cout << "For this many years: ";
    cin >> numyears;

    numer = intrate * (start/payperyear);
    e = -(payperyear * numyears);
    b = (intrate / payperyear) + 1;

    denom = 1 - pow(b,e);

    payment = numer/denom;

    cout << "Payment is: " << payment;

    return 0;
}

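(For the record, that middle block of maths is just the standard loan-amortisation formula: payment = (rate/periods * principal) / (1 - (1 + rate/periods)^(-periods*years)). Feeding it some made-up numbers - 1000 borrowed at a rate of 0.1, paid monthly over 5 years - gives a payment of roughly 21.25 per period.)
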
Having typed that out as the book told me to, I sat back and ran it. It ran perfectly, unsurprisingly, as it had been written perfectly by the authors. What did I learn by 'creating' it? Absolutely nothing, as it was all provided for me.

For example - the symbol "==" is a perfectly valid operator in C++. If I were to take a segment of the code above and, writing it myself, had typed

    numer == intrate * (start/payperyear);
    e == -(payperyear * numyears);
    b == (intrate / payperyear) + 1;

Instead of the correct

    numer = intrate * (start/payperyear);
    e = -(payperyear * numyears);
    b = (intrate / payperyear) + 1;

The code would still compile and run. No errors appear, and at default compiler settings no warnings either, because the language is perfectly valid - it just doesn't mean what I intended. Each of those lines compares two values and throws the answer away, so numer, e and b are never actually set, and the final payment comes out as nonsense.
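
To make the point concrete, here's a stripped-down sketch of the same trap - nothing from the book, just a little illustration I've put together - where the "==" line compiles happily but changes nothing, while the "=" line actually stores the value:

#include <iostream>
using namespace std;

int main()
{
    double total = 0;                            // Deliberately start at zero

    total == 5.0 * 2.0;                          // Comparison: the result (false) is computed and thrown away
    cout << "After '==': " << total << "\n";     // Still prints 0

    total = 5.0 * 2.0;                           // Assignment: total really does become 10
    cout << "After '=': " << total << "\n";      // Prints 10

    return 0;
}

A compiler with the warnings turned up (g++ -Wall, say) will usually flag that "==" line as a statement with no effect, which is exactly the sort of thing you only discover by making the mistake yourself.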

Now I *made* that mistake the last time I wrote something off my own bat, and it took me a while to find and fix it, but in doing that I learned the distinction well. These examples, presented perfectly and requiring no interaction or thought, are absolutely useless.

Gripe over.