This is a rather difficult question. Normally, things in mathematics are defined by their properties, but there doesn't seem to be any truly consistent set of properties held by things mathematicians call "numbers".
My real analysis book identifies 10 properties that the set of "real numbers", ℝ, has. Some of those are really more than 1 property, so by my count it's 13 properties. But we consider the "natural numbers", ℕ, to be numbers, and yet 4 of those 13 properties don't hold for the natural numbers -- addition doesn't have an identity or inverses, multiplication doesn't have inverses, and they aren't continuous (which we'll get to later).
So do all "numbers" have the remaining 9 properties? Well, no. Complex numbers, ℂ, don't have a standard ordering (ordering accounts for 3 of the properties that ℕ has), leaving 6 properties common across all things called numbers. Is that all? Not quite. Quaternion multiplication (in ℍ) doesn't commute, so that's 5 common properties. Octonions, another system under the umbrella of "numbers", are very strange in that their multiplication isn't even associative. That leaves 4 common properties: addition is (1) commutative and (2) associative, multiplication has (3) an identity, and (4) multiplication "distributes" over addition.
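To make the quaternion claim above concrete, here's a quick sketch in Python of the standard Hamilton product on 4-tuples (w, x, y, z), standing for w + xi + yj + zk; even the basis elements i and j refuse to commute:

```python
# Quaternions represented as tuples (w, x, y, z) = w + x*i + y*j + z*k.
def qmul(a, b):
    """Hamilton product of two quaternions."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (
        aw*bw - ax*bx - ay*by - az*bz,   # real part
        aw*bx + ax*bw + ay*bz - az*by,   # i part
        aw*by - ax*bz + ay*bw + az*bx,   # j part
        aw*bz + ax*by - ay*bx + az*bw,   # k part
    )

i = (0, 1, 0, 0)
j = (0, 0, 1, 0)

print(qmul(i, j))   # (0, 0, 0, 1)  -> i*j =  k
print(qmul(j, i))   # (0, 0, 0, -1) -> j*i = -k
```

Since ij = k but ji = -k, commutativity of multiplication is gone (associativity survives, though; it's the octonions that lose that as well).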
Even beyond the systems I've mentioned above, there are plenty which are considered, in some sense, "numbers" (surreal numbers, hyperreal numbers, p-adic numbers, etc.) which have even stranger properties. Surely, you'd think, addition must commute. I mean, that's sort of fundamental to the concept of addition, really. Could anyone seriously think of a number system in which addition didn't commute? Yes, someone did. Among the ordinal numbers, ω is the smallest non-finite ordinal, and ω+1 does not equal 1+ω: placing one new element in front of an ω-length sequence gives something that still looks exactly like ω, while tacking one onto the end gives a sequence with a last element, which ω lacks -- so 1+ω = ω, but ω+1 > ω. You can blame Georg Cantor for that one.
The remaining common properties (if there are any left) are a very meager foundation to hang the concept of "number" on. Even worse, there are systems which have those properties (and more!) which aren't considered numbers. For example, n by n matrices have an associative, commutative, invertible addition operation with an identity, and an associative multiplication with an identity that distributes over addition, and they are even continuous. But they aren't considered numbers.
So, what is a number? Generally, it's whatever mathematicians feel acts like a number. The catch is that mathematicians have felt, at different times, that different properties of numbers were the important ones, and extended their concept of numbers in the general direction of what made sense to them at the time.
To the Greeks, two sorts of "numbers" were important: positive integers, which could be used to count things, and lengths in geometry. Pure lengths didn't have the concept of a "unit", so there was no length that corresponded to 1, but unlike integers, lengths did have the concept of arbitrary proportions: it was possible to say that length a was to length b as length c was to length d, and given any three of a, b, c, or d, to construct the fourth. Multiplying lengths gave areas or volumes, but if a specific length was picked to be a unit length, there were ways to multiply lengths to get other lengths (since given a, b, and c, you could construct d such that a:b=c:d, then by setting one of the four to the unit, you could multiply using 1:a=b:ba and divide using a:1=b:b/a). Addition and subtraction of lengths are also easy. It is also easy to multiply a length by an integer, to get lengths which are twice, thrice, or a million times as large, and to divide a length by an integer, to get lengths which are half, a third, or a millionth as large.
The Greeks were quite adept at dealing with proportions, and had no trouble working with proportions of integers, nor with proportions of lengths. In fact, they felt that the two were quite closely related. They believed that any two lengths, say a and b, could be divided up into integer numbers of equal parts -- a into m parts and b into n parts -- such that the parts a/m and b/n would be equal (let's call that common part c). In their terminology, the length c "measured" both a and b, and as such a and b were commensurate ("measurable by a common standard"). In terms of proportions, this meant that a:b = m:n, where a, b are lengths and m, n are integers, which (to the Greeks) was an important property that made lengths workable (to them) as numbers. It really rocked their world when they were able to show that there were pairs of incommensurate lengths -- and that they were really easy to find, too. I'll come back to that proof in a later post.
Although the Greeks had counting numbers, ratios of counting numbers, and lengths as numbers, all of their numbers were positive. Their concept of subtracting lengths was such that either only one of a-b or b-a existed, or the two were equivalent (i.e., either they felt that subtracting a larger from a smaller didn't make sense, or it resulted in a length equivalent to subtracting the smaller from the larger). But the idea of a negative number just didn't enter into their thinking.
For a long time, negative numbers weren't trusted to be meaningful. The only place they come up is in subtraction -- taking 3 apples away from a pile of 2, or shortening a 4-foot stick by 5 feet -- where the results are nonsense. Nowadays, one can argue for the utility of negative numbers by suggesting that if you have $4 and spend $6 on a book, the -$2 that results is money you owe for the book. That argument wouldn't have made sense to a medieval merchant who, with 4 coins in his pocket, quite literally couldn't see how to spend 6 of them. Besides, the modern form of double-entry bookkeeping was developed by and for medieval merchants to keep track of quite complex transactions, and this transaction would be recorded as $6 book (dr) = $4 cash (cr) + $2 loan (cr). Each account (book, cash, loan) has a debit (dr) and a credit (cr) side, on which positive values are summed. If the (cr) side of the loan is greater than the (dr) side, you owe money. If the (dr) side of cash is greater than the (cr) side, you have cash in hand. To pay off the loan, the bookkeeper would record $2 loan (dr) = $2 cash (cr), which adds a $2 debit to the loan (balancing the $2 credit that started this mess) and a $2 credit to cash (indicating that $2 in cash was paid out). No negative numbers needed, even for arbitrarily complex transactions.
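To see how that keeps every recorded figure non-negative, here's a minimal sketch in Python of the two transactions above (the `post` helper and the account names are just my illustration, not any real accounting system):

```python
from collections import defaultdict

# Each account keeps separate non-negative running totals of debits and credits.
ledger = defaultdict(lambda: {"dr": 0, "cr": 0})

def post(debits, credits):
    """Record one transaction; total debits must equal total credits."""
    assert sum(debits.values()) == sum(credits.values())
    for account, amount in debits.items():
        ledger[account]["dr"] += amount
    for account, amount in credits.items():
        ledger[account]["cr"] += amount

# Buy the $6 book with only $4 cash in hand, owing the remaining $2:
post(debits={"book": 6}, credits={"cash": 4, "loan": 2})
# Later, pay off the $2 loan in cash:
post(debits={"loan": 2}, credits={"cash": 2})

for name, account in ledger.items():
    print(name, account)
# book {'dr': 6, 'cr': 0}
# cash {'dr': 0, 'cr': 6}
# loan {'dr': 2, 'cr': 2}   <- the loan is balanced, and nothing went negative
```

A real medieval ledger was richer than this, of course; the point is only that owing money shows up as a credit balance on an account, not as a quantity less than zero.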
What got negative numbers accepted was their utility in intermediate results: sometimes, for some calculations, it's faster or easier to do things with subtraction, even if partway through the result is negative. If you pretend the operation makes sense, and just keep working it formally, you'll get a sensible (positive) answer in the end. After doing this enough, mathematicians stopped seeing a problem in working with the "nonsensical" negative numbers; they behaved, formally, just as one would expect numbers to behave. And, gradually, they became accepted.
The same thing happened with the so-called "imaginary numbers", whose very name betrays their initial unacceptability. At one time, mathematicians in Europe showed off their prowess (and got jobs) by demonstrating their ability to solve problems that no one else could. At one point, the problem most in demand was solving cubic equations -- equations of the form ax³+bx²+cx+d = 0, where a, b, c, and d are integers. There can be 1, 2, or 3 solutions to such an equation (in ℝ, at least), and if you have a solution it's quite easy to verify it; but finding a solution in the general case is quite a challenge. When a general solution was found, it involved square roots of a combination of the coefficients, and that combination was sometimes negative. The square of any real number is non-negative, so the idea of taking the square root of a negative number is meaningless within the reals. But if you accepted that the square root of this negative number might be OK, and just carried the solution through, you ended up with a real number that solved the original equation. Over a couple of hundred years, mathematicians working with imaginary and "complex" numbers in this fashion found that they worked quite well, and now they are used routinely in engineering, mathematics, physics, etc., without any problems.
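Here's a small sketch of that phenomenon in Python, using the depressed cubic x³ = 15x + 4 that Bombelli famously worked through (any cubic can be reduced to this b = 0 form, so nothing essential is lost): the Cardano-style formula runs straight through the square root of -121 and still lands on the real root 4.

```python
# Cardano's formula applied to x^3 + p*x + q = 0 with p = -15, q = -4,
# i.e. x^3 = 15x + 4, whose real root x = 4 is easy to verify by hand.
import cmath

p, q = -15.0, -4.0

# The quantity under the square root is (q/2)^2 + (p/3)^3 = 4 - 125 = -121,
# so the intermediate values are complex even though the root we want is real.
disc = (q / 2) ** 2 + (p / 3) ** 3
s = cmath.sqrt(disc)                 # 11i -- "meaningless" within the reals
u = (-q / 2 + s) ** (1 / 3)          # a cube root of 2 + 11i  (= 2 + i)
v = (-q / 2 - s) ** (1 / 3)          # a cube root of 2 - 11i  (= 2 - i)

x = u + v
print(x)                             # ~ (4+0j): the imaginary parts cancel
print(x**3 + p*x + q)                # ~ 0, so x really does solve the cubic
```

Nothing clever happens with the complex numbers here; the calculation simply refuses to panic at the square root of a negative number, and the answer comes out real anyway -- which is essentially the experience that won mathematicians over.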
So the Greeks extended their idea of lengths being commensurate (like ratios of integers) to incommensurate lengths because it worked for what they needed, and the new "numbers" had the properties of numbers they desired. Medieval mathematicians extended their idea of numbers to include negative numbers because the new "numbers" had the properties of numbers they desired and helped them solve problems they were interested in. Renaissance mathematicians extended their idea of numbers to include complex numbers because the new "numbers" had the properties of numbers they desired and helped them solve problems they were interested in. Georg Cantor extended the integers to transfinite cardinals and transfinite ordinals to solve problems he was interested in, and they had the properties of numbers he cared about, so he called them numbers (after all, integers were indisputably numbers, and were used for both cardinality (how many) and ordinality (what order), so why shouldn't his transfinite extensions be numbers?). William Hamilton's quaternions extended the complex numbers in much the same way that the complex numbers extended the real numbers (and the octonions extend the quaternions the same way), so why shouldn't they be numbers? Surreal numbers (due to John H. Conway) provide an alternate formulation of number which subsumes the reals and the ordinals in one gigantic system, with addition, subtraction, multiplication, division, order, roots, etc. all working as one would expect them to.
So what's a number? Anything which generally acts as one would expect a number to act, keeping in mind that there are a lot of different ways numbers can be expected to act.
That's not to say that specific types of numbers are loosely defined. On the contrary, each type of number that's been referred to above, plus many more, has a very specific, concrete definition, properties, etc.
Next post, I'll describe the natural numbers and some of the defining properties associated with them.