
Avoiding null pointers and keeping polymorphism

In my code I just noticed that I quite often need to check for nullptr, even though nullptr should not be possible (according to specified requirements).

However, nullptr might still occur, since other people might pass a nullptr believing it is OK (unfortunately not everyone reads/writes the specification), and this defect cannot be caught unless the problem is triggered at run time during testing (and high test coverage is expensive). Thus it might lead to a lot of post-release bugs reported by customers.

e.g.

class data
{
public:
     virtual ~data() = default;
     virtual void foo() = 0;
};

class data_a : public data
{
public:
     virtual void foo(){}
};

class data_b : public data
{
public:
     virtual void foo(){}
};

void foo(const std::shared_ptr<data>& data)
{
    if(data == nullptr) // checking everywhere is easy to forget, and costs a branch
        return;
    data->foo();
}

Usually I would simply use value-types and pass by reference and copy. However, in some cases I need polymorphism which requires pointers or references.

So I have started to use the following "compile time polymorphism".

class data_a
{
public:
     void foo(){}
private:
     struct implementation;
     std::shared_ptr<implementation> impl_; // pimpl-idiom, cheap shallow copy
};

class data_b
{
public:
     void foo(){}
private:
     struct implementation;
     std::shared_ptr<implementation> impl_; // pimpl-idiom, cheap shallow copy
};

class data
{
public:
     data(const data_a& x) : data_(x){} // implicit conversion
     data(const data_b& x) : data_(x){} // implicit conversion
     void foo()
     {
          boost::apply_visitor(foo_visitor(), data_);
     }
private:
     struct foo_visitor : public boost::static_visitor<void>
     {
          template<typename T>
          void operator()(T& x){ x.foo(); }       
     };

     boost::variant<data_a, data_b> data_;
};

void foo(const data& data)
{
   data.foo();
}

Does anyone else think this is a good idea, when practical? Or am I missing something? Are there any potential problems with this practice?

EDIT:

The "problem" with using references is that you cannot transfer ownership through a reference (e.g. when returning an object).

data& create_data() { data_a temp; return temp; } // ouch... cannot return temp;

The problem with rvalue references (does polymorphism even work with rvalue references?) is then that you cannot share ownership.

data&& create_data() { return std::move(my_data_); } // bye bye data

A "safe" pointer based on shared_ptr does sound like a good idea, but I would still prefer a solution where the non-nullness is enforced at compile time, though that may not be possible.


You can always use the null object pattern and references only. You cannot pass a null reference (well, you sort of can, but that is the caller's error, not yours).


I personally prefer to encode the null possibility in the type, and thus use boost::optional.

  • Create a data_holder class, that always owns a data (but allows polymorphism)
  • Define your interfaces in terms of data_holder (non-null) or boost::optional<data_holder>

This way it is perfectly clear whether or not it may be null.

Now, the hard part is to ensure that data_holder never holds a null pointer. If you define it with a constructor of the form data_holder(data*), then the constructor may throw.

On the other hand, it could simply take some arguments and defer the actual construction to a factory (using the Virtual Constructor Idiom). You still check the result of the factory (and throw if necessary), but then you have only one place to check (the factory) rather than every single point of construction.

You may want to check boost::make_shared too, to see argument forwarding in action. If you have C++0x, then you can do argument forwarding efficiently and get:

template <typename Derived> struct tag {};

template <typename Derived>
data_holder(tag<Derived>) : impl(new Derived()) {} // a constructor template's arguments cannot be supplied explicitly, hence the tag

// Other constructors for forwarding

Don't forget to declare the default constructor (non-template) as private (and not define it) to avoid an accidental call.


A non-null pointer is a widely known concept, and used in safe subsets of C, for instance. Yes, it can be advantageous.

And, you should use a smart pointer for this. Depending on your use case, you may want to start with something similar to boost::shared_ptr or std::unique_ptr (C++0x). But instead of only asserting the non-null-ness in operator*, operator-> and get(), throw an exception.

ETA: forget this. Although I think this general approach is helpful (no undefined behavior, etc.), it does not provide you with compile-time checks for non-null-ness, which is what you probably want. For that you would have to use such non-null pointers pervasively throughout your code, and without language support this would still be leaky.


Generally when you are using virtual functions as you have outlined above it is because you don't want to know about all the classes that implement the desired interface. In your example code you are enumerating all the classes that implement the (conceptual) interface. This becomes frustrating when you have to add or remove implementations over time.

Your approach also interferes with managing dependencies since your class data is dependent on all the classes whereas using polymorphism makes it easier to limit the dependency on particular classes. Using the pimpl idiom mitigates this but I've always considered the pimpl idiom as kind of annoying (as you have two classes which have to stay in sync to represent one concept).

Using references or checked smart pointers seems like a simpler, cleaner solution. Other people are already commenting on those so I won't elaborate right now.
