I'm trying to improve the quality of my code and am using classes/objects a lot more now. It has occurred to me there are a number of things I'm undecided about as to the best way to proceed. I know there's no right or wrong way but would be interested in your thoughts on:
- If your class accepts a number, for example a page number, should you validate the number (intval, (int), is_numeric, etc.) and then send it to the class, or should the class do it? My gut feel is the class might validate the page (for example, make sure it is between 1-10) but it should assume the variable is of the right type. This is so that the class represents only one thing (rather than one thing + generic validation).
- How do you handle internal errors in classes? For example, you can't return anything from a constructor. What do you do?
- I read a lot that static functions are a bad idea because they make unit testing difficult (I don't fully understand this myself yet, but I will get to that in due course). However, they seem useful to me in that I can group them together in classes (rather than functions.php) and autoload them via namespace. If you have them as procedural functions, you cannot do dependency injection, and making them non-static seems pointless when they don't have any state. Thoughts?
- Given point 3, are const properties also bad for unit testing/bad practice?
I think it's OK for the class to do some value validation, such as checking that a number is non-negative or that it's within an appropriate range. I also think you're right that you shouldn't try to do type validation. It's never a good time when you're fighting the programming language, and PHP is loosely typed. Though, feel free to use type hinting liberally.
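To sketch what that split looks like in practice, here is a minimal, hypothetical `Paginator` class: the type hint handles the "right type" concern at the boundary (assuming PHP 7+ scalar type hints), while the class itself only validates the value range. The class name and the 1-10 range are just illustrative, taken from the question above.

```php
<?php
// Hypothetical example: type checking via the hint, value checking in the class.
class Paginator
{
    private $page;

    public function __construct(int $page)
    {
        // The class validates the *value*; the int hint already guarantees the *type*.
        if ($page < 1 || $page > 10) {
            throw new InvalidArgumentException(
                "Page must be between 1 and 10, got {$page}"
            );
        }
        $this->page = $page;
    }

    public function getPage(): int
    {
        return $this->page;
    }
}
```

The calling code stays free to coerce or sanitise raw input (`(int) $_GET['page']`) before it ever reaches the class, which keeps the class focused on one thing.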
Throw an exception.
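Since a constructor can't return an error value, an exception lets the caller decide how to recover. A minimal sketch (the `Config` class and the `db_host` setting are hypothetical):

```php
<?php
// Hypothetical example: signalling a constructor failure with an exception.
class Config
{
    private $settings;

    public function __construct(array $settings)
    {
        if (!isset($settings['db_host'])) {
            throw new RuntimeException('Missing required setting: db_host');
        }
        $this->settings = $settings;
    }
}

try {
    $config = new Config([]); // missing db_host
} catch (RuntimeException $e) {
    echo 'Could not build config: ' . $e->getMessage();
}
```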
Static functions are probably OK. But static data can certainly make testing difficult.
If a class is all instance-based, then when we create a new object, we know we're getting a clean slate, and any methods we call on that new object will behave in a predictable way. But if a class uses any static data, then even when we create a brand new object, it might still carry some state from other, past objects, and as a result it might behave differently.
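A tiny, contrived example of that leakage (the `Counter` class is made up for illustration): the static property is shared across every instance, so a "fresh" object is not actually a clean slate.

```php
<?php
// Contrived example: static data leaks state between objects (and between tests).
class Counter
{
    private static $count = 0;

    public function increment(): int
    {
        return ++self::$count;
    }
}

$a = new Counter();
$a->increment(); // 1
$a->increment(); // 2

$b = new Counter();   // a brand new object...
echo $b->increment(); // ...but this prints 3, not 1 — state carried over
```

This is exactly why a unit test that ran after another test touched `Counter` can see different results than one run in isolation.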
Those are OK, because you don't have to worry about their value changing from one instance to another.
I just wanted to add a caveat to Jeff's answer to this. They're good and very useful provided their value is only ever used internally within the class. This is good:
$ftp = new FTPConnection('server', 'username', 'password');
Whereas this is bad:
echo 'Total price: ' . Locale::CURRENCY . $total;
The difference is that, due to encapsulation, the constant should only have meaning to the class which defined it. Using class constants outside the scope of the class which defined them is bad. The easiest way to see if you're breaking this is to answer the question: "If the value of the constant changes, will my code behave differently?" If you answer yes, then your code is breaking encapsulation by depending on the value of the constant. This is bad because the author of the class should be able to alter the value of the constant without breaking anyone else's code.
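One way to repair the bad `Locale::CURRENCY` example is to keep the constant private to the class (private class constants need PHP 7.1+) and expose behaviour instead, so callers never touch the value directly. The `Locale` class and `formatPrice` method here are hypothetical:

```php
<?php
// Sketch: the constant stays internal; callers depend on behaviour, not the value.
class Locale
{
    private const CURRENCY = '£';

    public function formatPrice(float $amount): string
    {
        return self::CURRENCY . number_format($amount, 2);
    }
}

$locale = new Locale();
echo 'Total price: ' . $locale->formatPrice(19.99); // Total price: £19.99
```

Now the author can change the currency symbol without breaking any caller's code, which is the encapsulation test described above.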
Amazing, I was going to ask a very similar question. I'm redoing old procedural scripts as OOP. Today I'm converting a script that configures and sends emails. My question was if the mailer should check the validity of email addresses or should the calling script be responsible for providing valid data. I would like to hear opinions about this issue.
As I was thinking about it I realized that other classes or scripts might also want to do the same kind of validation. I implemented the validation methods in an abstract class "ValidateEmails" (instead of static methods in the Email class itself), which allows any script to use them. I also made the ValidateEmails class a parent of the Email class (with the keyword extends). Is there a better way to do this? If yes, why is it better?
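One common alternative to inheriting from a ValidateEmails base class is composition: make the validator a small standalone class and inject it into the mailer. A rough sketch under that assumption (the class and method names below are hypothetical, not from the original post), using PHP's built-in `filter_var` for the actual check:

```php
<?php
// Sketch of composition over inheritance: the mailer *has* a validator
// rather than *is* a validator.
class EmailValidator
{
    public function isValid(string $address): bool
    {
        return filter_var($address, FILTER_VALIDATE_EMAIL) !== false;
    }
}

class Mailer
{
    private $validator;

    public function __construct(EmailValidator $validator)
    {
        $this->validator = $validator;
    }

    public function send(string $to, string $subject, string $body): void
    {
        if (!$this->validator->isValid($to)) {
            throw new InvalidArgumentException("Invalid address: {$to}");
        }
        // ... hand off to mail() or an SMTP library here
    }
}

$mailer = new Mailer(new EmailValidator());
```

The usual argument for this over `extends` is that "an email message is a validator" is a strained is-a relationship, and composition lets any other class reuse `EmailValidator` (or a test substitute) without inheriting from it.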
Thinking about it I realized that both the class and the calling script should check for errors, because they are doing so at different levels. The script is most likely interacting with a user and can give feedback so that the user can correct his input. The class itself does not know who or what the script is interacting with; it only knows what kind of problem the error can cause down the line (bounced or undelivered mail, etc.). Based on the expected consequence, the class can either abort the PHP process and notify the administrator if it is a serious error, or just log a warning if the error does not warrant aborting the process.
My thinking is influenced by the error reporting code I have installed in all my production websites. Some time back I discovered that a website had been partly inoperative for months on account of an error reported only to website visitors who did not bother telling us about it. Who knows how much business was lost during that time! Aborting the script and notifying the administrator solves the problem of "silent" bugs by making the administrator aware of them.
For example, suppose that in a contact form the visitor enters a bad email address; the script should give feedback asking for a correct address. If a faulty script fails to do this and passes the bad address to the email class, the class should abort the PHP process and notify the administrator, because the underlying problem is not a bad email address but a faulty script that did not detect it.
Unfortunately the problem is even more complicated than that. Suppose you create a database with valid emails and one of them becomes inoperative because the MX DNS record disappears. You don't want to abort the script, you just want to skip that particular email and notify the administrator.
I think there are two main kinds of error checking: validating data input by the user, and checking the correctness of data passed between application components (classes, methods, etc.). The main difference between them is that the first kind must assume it is perfectly okay for the data to be invalid; the user is allowed to enter invalid data, and this doesn't cause any errors in the application, though of course the user should be informed of the problem. Once this first layer of validation passes, the rest of the system should be able to handle the data without errors, so if a specific class is then fed invalid data it is allowed to throw an exception, halt the application, etc. In other words, a validation error at the class level should be treated as a bug in the application, something to be corrected. Of course, the administrator should be notified of such errors if possible.
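The two levels can be sketched side by side; the function and class names here are hypothetical. The form layer treats bad input as a normal outcome and collects messages for the user, while the class layer treats bad input as a bug and throws:

```php
<?php
// Level 1: user input validation — invalid data is expected and reported politely.
function validateSignupForm(array $input): array
{
    $errors = [];
    if (filter_var($input['email'] ?? '', FILTER_VALIDATE_EMAIL) === false) {
        $errors[] = 'Please enter a valid email address.';
    }
    return $errors; // an empty array means the input may proceed
}

// Level 2: component-level checking — by now, invalid data indicates a faulty caller.
class Subscriber
{
    public $email;

    public function __construct(string $email)
    {
        if (filter_var($email, FILTER_VALIDATE_EMAIL) === false) {
            throw new InvalidArgumentException("Invalid email reached Subscriber: {$email}");
        }
        $this->email = $email;
    }
}
```

The first function's return value is meant for the user; the second class's exception is meant for the developer (and, per the earlier posts, the administrator's error log).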
Personally, I tend to keep error checking of the second type to a bare minimum, otherwise I might risk cluttering all of my code with endless error checking procedures. If a class is performing some security critical task, then of course I implement as much error checking as possible but in other cases I am happy to implement only data sanitisation where needed, which is much simpler and shorter than validation. By sanitisation I mean ensuring the application doesn't crash when a class is given bad data - but it is allowed to work incorrectly.
I have found that sometimes error checking can make a task take twice as long to code as it would if I could afford not to bother with errors. Especially when a system uses or contacts third-party components, we can expect errors at any step.
Thanks for the replies, very useful.
Avoid using static properties and methods unless you have to; statics are poor OOP practice.
This is a good answer. When it comes to validating user-supplied data: always, without question. However, when it comes to anticipating internal errors, such as a third-party vendor service failing or possible issues writing to the file system, it comes down to picking your battles. My approach is to identify likely points of failure and implement strategies to handle them. That could be anything from a simple response code to sending an email to an administrator, depending on the severity of the problem. However, assuming failure on every line of code you write is overkill unless some intensive processing is occurring. Even then, though, that process would likely be isolated in its own class or method, where a simple exception and proper handling would be enough.
This topic is now closed. New replies are no longer allowed.