We’re looking for advisers across all disciplines to provide their expert opinions and insights. While traditional nonprofit, business, legal, and promotional input is always helpful, other backgrounds like manufacturing, architecture, and engineering are also important. Even if you’re not ready to commit to joining our advisory board or board of directors, just being able to reach out to experts in specific fields is invaluable.
Advisers are expected to be experts in one or more fields, which in most cases means 8+ years as an experienced professional, like a manager or executive. Because we realize such experienced professionals often have limited time to spare, we keep communications to a minimum unless they also participate as an active member or on one of our boards. If you’re interested in becoming an adviser, please fill out the Adviser Form or email us at registration@innovativeFuture.org.
We welcome anyone interested in participating, but with the Collaboration Tree (cTree) framework getting started we especially need people to help on the technical side. Anyone with programming experience should be able to help, but programmers with web development experience would be especially helpful. If you or anyone you know has programming experience and might like to participate please let us know or check out our developing site demo project on GitHub.
Makerspaces, often referred to as hackerspaces, are places where people can go to make virtually anything they want. They’re closely tied to the maker movement: a growing trend of people expressing their creativity by making things, epitomized by Make:, Etsy, and Maker Faire. Most makerspaces provide 3 key elements: professional grade machines, training sessions, and a community of creative people to collaborate with. To support themselves, makerspaces generally charge people membership fees to use their machines and facilities, as well as for advanced training sessions.
The biggest innovation of makerspaces is making extremely expensive machines available to average people, in some cases making more than a million dollars of machinery available at a single location. The training provided at most makerspaces is also extremely important, ensuring members know how to use the machines available, as well as turning them into a sort of à la carte fabrication trade school. What’s especially unique about makerspaces, as opposed to schools, is that they foster a community of creative people, from hobbyists to entrepreneurs, more interested in creating than learning a trade. People who’ve been part of these communities often say it’s inspiring to be part of such a creative environment and that they often get support and ideas from other members.
At first glance it may seem like you’d just find amateur tinkerers and crafters at makerspaces, capable of creating little more than low quality novelty items, but real businesses are coming out of makerspaces. As with any business venture, most of these businesses aren’t wild successes, but the low cost of development means it’s easier for entrepreneurs to try lots of ideas until they’re successful. This is resulting in a new manufacturing renaissance, led by individuals instead of large corporations. Because it’s led by individuals who can’t compete with large companies, much of what’s being developed is completely unique, as opposed to improving on existing products.
Makerspaces are already extremely widespread, with Hackerspaces.org reporting over 1000 active spaces globally, more than a third of them in the US alone. The 2 most popular makerspace chains are TechShop, a company with 8 large spaces in the US, and FabLab, a network of over 300 independently operated spaces worldwide. Makerspaces show no signs of slowing down, and as more entrepreneurs turn to them to jump start businesses they’ll continue to grow. This growth will increasingly drive improvement in supporting technologies like 3D printers and circuit prototyping, as well as encourage the development of new and innovative technologies.
In this first article of our new series, “Shaping our Future”, we’ll cover 3D printing and its potential impact. At first glance 3D printing might seem like little more than a novelty, but it has the potential to change the way we create most things in our lives. The term 3D printing is actually the popular term for the process of additive manufacturing, where three dimensional objects are created by adding or solidifying a substance one layer at a time. The process resembles printing, which is where the popular term comes from.
Today, most manufacturing involves creating a reference object, making a mold of the reference, then injecting a material into the mold to create the finished object. There are some significant limitations with this process, like difficulty creating complex shapes because the mold can’t be easily made or freed. All objects made with a mold will be exactly the same, so each variation needs a different mold. Also, the larger the object, the larger and more difficult it is to maneuver the mold, resulting in a practical limit to the size of solid objects.
With 3D printing the reference object is created in a computer, then a machine creates a physical object based on the shape of the digital object. One of the biggest benefits of 3D printing is that it can create extremely complex shapes, which might be impossible to create with a traditional molding process. Another big benefit is the object created can be completely unique, which means it can be completely customized. This has led to wide adoption of 3D printing for rapid prototyping, so new designs can be easily tested and modified.
We’re just starting to see some of the benefits of 3D printing beyond its novelty factor. Because 3D printing can create complex shapes, it’s been used to create parts which are stronger and more efficient. Researchers have also worked on printing larger objects, like parts for cars and planes. Even more extreme, 3D printing has actually been used to create entire frames for houses. On the other end of the scale, commercial 3D printers have reached nanometer-scale objects. At such a small scale it’s perfect for building metamaterials and even printing human organs.
There’s still more work needed before 3D printing displaces traditional methods of manufacturing, but it’s improving every year. Speed is one hurdle 3D printing will need to clear in order to compete with traditional manufacturing, but there are promising technologies being developed to improve it. Resolution isn’t much of an issue anymore with 3D printers, and it’s even possible to achieve resolutions of 20 microns on machines costing as little as $2500. As far as materials, plastic is still the most common material used by 3D printers, but advancements are being made with other materials like metal, ceramics, resin, water soluble supports, and even carbon fiber and nanotubes.
With all of the innovation around 3D printing it’s clear it will play a big role in the future of manufacturing. Some people are even calling 3D printing the third industrial revolution. Industrial or otherwise, 3D printing will be one of the things that shapes our future, sometimes in unexpected ways. For news about 3D printing 3ders.org is a great reference.
Computers have evolved from basic business machines to our windows to the world, and they show no sign of slowing down. This poses an interesting and challenging question: what will computers look like in the future and how will we interact with them? This isn’t a new question, and futurists and scientists love to fantasize about a future where computers are everywhere, making everything smarter. Maybe at some point computers will be like helpful bacteria, spread across every surface, providing us with virtually unlimited information and processing power, but that’s probably very far off and there’s no clear path toward a future like that. Instead, it might be more useful to look at how computers have evolved and suggest how they might continue to evolve into the future.
To know where you’re going, it’s often helpful to know where you started. Computers had a slow start as basic theories and research into mechanized number computation, going back as far as 1822. Things started to pick up during the 1940s as computers gained the fundamental skills of digital calculation and running programs, driven by academic and military interests. IBM introduced the first mass produced computers, which were large (room sized) machines targeted at big businesses. In 1976 Apple helped usher in the age of personal computing with what was essentially the first desktop computer, followed in 1981 by the IBM PC, although these were still largely business machines. Over time these personal computers evolved to meet the needs and interests of consumers, spurred on by technical advances and the mass adoption of the internet.
More recently we’ve seen a shift from fixed computing to portable computing, starting first with laptops and eventually evolving into smartphones and tablets. As access to computers has become more valuable to people, so has the importance of always having that access. Now it seems we’ve entered a very frantic period, where the very definition of what makes a computer is being completely shaken up to see what form they’ll take going forward. The lines between desktops, laptops, tablets, phones, and soon wearables have become increasingly blurred, and it’s unclear what will remain once the dust settles.
From this evolution we can highlight a few key points which indicate the direction we’re headed in. Obviously people want access to any and all information as quickly as possible, which is why most people in developed nations now carry smartphones. People also want to carry as little as possible, which is why cell phones got down to almost the size of a matchbox before touchscreens became prevalent. In contrast, people want to view information and entertainment at as large a size and as high a resolution as possible, which is one of the reasons phone sizes have grown from the original 3.5″ iPhone to over 6″ for some smartphones. The other reason for increasing phone sizes is that people want to interact with computers as quickly and easily as possible, hence the larger touchscreens. This need for quick input has also led people who need to enter more than small amounts of text to continue carrying around laptops or tablets with keyboard attachments. In addition, computers have gained peripherals like microphones and cameras which allow people to not only pull in information but to push information out.
It’s also important to look at some of the emerging technologies which will help computers continue to evolve. Docking laptops to show their content on larger monitors isn’t too new, but some companies have now created laptops and tablets which can have smartphones docked into them, increasing viewing and input ease while still essentially using the same device. Screencasting is a much newer take on a similar idea: wirelessly mirroring or extending a phone or tablet screen to a TV or monitor. Even more disconnected, cloud computing is allowing people to essentially rent computing power, as well as store their data in a single place where all their devices can access it. There’s also been improvement in display technology, with transparent and even flexible displays, as well as technologies which can put displays directly in front of people’s eyes without obstructing their view.
Taking all of this into consideration, what does it mean for the future of computers? For now, we’ll probably continue to carry devices with us, at least until both displays and input can be done well without a touchscreen. Our smartphones will increasingly be the hub for all our information, wirelessly screencasting to larger screens when we want a bigger picture. Eventually cloud computing will turn our phones into just another screen and access point for our information, which will be maintained on a cloud server, probably with the more privacy concerned hosting their own clouds from small home servers.
The biggest hurdle to overcome in terms of freeing ourselves from handheld computers will be enabling easy input. Displaying images, video, and apps directly in front of our eyes will eliminate the need to carry around a bulky screen, but without a way to interact beyond voice commands it’ll probably be more of a novelty. Luckily researchers have already been working on technologies like gesture recognition and even brain wave input, which will eliminate the need for touchscreens. For people who need to do more precise work, roll-up keyboards and light pens could be used. A future where everyone wears glasses or digital display contact lenses (once that technology is improved) is probably unlikely, so many people will probably still prefer to carry small personal devices for connecting to information when another display surface isn’t available. That said, in most places where people spend significant amounts of time (home, work, transportation, shopping) there will probably be no shortage of displays to access content on.
Ultimately, computers will continue to evolve and fade into the background until we only see smart-displays. Most people will probably only carry one smart-display on a regular basis, like glasses, a watch, or a digital compact, based on what they consider the most beneficial with the least inconvenience. All the information currently held on computers will simply be available on whatever smart-display people use. This may sound like long-term science fiction, but all of this technology is already being researched or built, and it’s very likely it’ll be commonplace before 2030.
There are now more cities being built from scratch, and at a larger scale, than at any other time in human history. There are over a dozen cities either planned or in development, and over half a dozen at a massive scale in areas either mostly or completely undeveloped. The question is what does this mean for the future of our civilization?
Before we dig into that question, let’s go over some of the details about these massive undertakings. Many of the biggest projects started in the last 10 years and are scheduled for completion in the next 5 to 10 years. They all carry multi-billion dollar price tags, with most estimated at $20 billion+, up to $100 billion estimated for the Khazar Islands project. Most of these projects are aiming for population sizes in the hundreds of thousands or even millions. For example, King Abdullah Economic City (KAEC) in Saudi Arabia aims to support a population of 1.4 million people. Most of the cities will also be extremely technologically advanced, with cities like Songdo International Business District building services like trash removal directly into their infrastructure, and Masdar City building in passive climate control not just for buildings but for the spaces between buildings.
While these developments are extremely impressive, it’s important to keep in mind that this isn’t a new concept. While Saudi Arabia is building flashy new cities like KAEC and Kingdom City, it also built cities like Yanbu and Jubail as far back as 1975. One of the largest successful planned cities was Navi Mumbai in India, built in 1972, now with a population of 1.1 million people. Going back even further, Brasilia, now the federal capital of Brazil, was a planned city built in 1960. While Brasilia’s design is generally considered a failure due to problems like large stretches of unused land between different parts of the city, it’s grown to a population of 2.8 million. Even the US has seen planned cities, like Columbia, Maryland, founded in 1967, which now has a population of nearly 100 thousand.
Getting back to the question at hand, how will the increase in number and scale of planned cities impact us? Some of the benefits are obvious, like a huge number of new, modern places to live, and carefully designed layouts as opposed to the haphazard layouts that can come from a city evolving over time. Many of these cities are also serving as test beds for new technologies which could potentially be integrated back into existing cities if they’re successful. While Masdar City in Abu Dhabi is on the smaller side, with a planned population of just 50 thousand, it’s working hard to become a hub for innovative energy technology.
There are also obvious problems when building entire cities with a preconceived plan, like potentially filling the city with features before you know how well they work for people or if there are problems with the way they’ve been designed or built. When you build a city from scratch you either have to think of everything before you start and execute on that design perfectly (which is impossible), or be resilient to change. It remains to be seen how resilient these new cities have been designed to be. Some of the more completed cities which have started allowing people to move in are also facing criticism with their population rates, finding it more difficult to attract people and businesses than anticipated.
With so many developments underway, and likely many more to come, it’s inevitable that we’ll see significant failures in at least some cities, which means huge monetary and environmental costs will be paid where they may not have been needed. The poor design of Brasilia is a great example of this, with far more construction done than may have been required. Despite this shortcoming, the city has still thrived and grown to be larger than even Navi Mumbai, which is generally considered the largest successfully planned city to date, showcasing how even a failed city can evolve to be a success.
More than just a test bed for technology and residences for more people, the increasing trend of planned cities also offers other potential opportunities. While the current crop of planned cities appears to focus more on technological and architectural innovations, future planned cities could be perfect for testing new rules and social systems which might otherwise be very difficult to test in existing cities. In his 2009 TED talk, Paul Romer proposes we establish “charter cities” to do just that. As he points out, villages may be too small to effectively test the benefits of new rules, while testing new rules in existing cities may be too disruptive because people wouldn’t be able to easily choose between living in a city with the new rules or not. Obviously building a planned city with a population of 1.4 million would be far too large (and expensive) for such an experiment, but building a smaller city with a plan for expansion would be a great opportunity to test new systems.
One problem with using planned cities as “charter cities” is their significant cost. Financing planned cities is so expensive and risky that it requires a massive payout to lure investors. Testing new rules and social systems only raises the risk further and could drive away investors, leaving the testing of some rules and systems to philanthropic organizations. Going the philanthropic route, even if $100 million could be raised, it would be virtually impossible to build a sizable city using the building practices currently employed. This means that in order to test some social systems, it may take a complete re-imagining of how planned cities are designed and built. This is the task our organization faces if we want to test large-scale social and technological innovations.
Some people are excited for a future where robots can make more stuff more efficiently, driving down cost. Others are justifiably concerned this will lead to fewer jobs and fewer people able to buy what’s being produced. We live in a very carefully balanced financial ecosystem, where any significant changes to production or consumption threaten to destabilize everything.
I see the increased mechanization of production as accelerating and inevitable, short of international regulations to limit it, which means the discussion needs to shift from “is it acceptable?” to “how do we adapt to it?” Due to the fragile nature of the financial ecosystem, it will be difficult, if not impossible, to transition gracefully (without significant human suffering) to this new paradigm where machines can increasingly do more of our work. This is one of the reasons I believe this organization is so important. Significant changes like this could destabilize existing societies and require entirely new, more resilient societies to be put in place in order to minimize suffering, but currently there aren’t any thorough blueprints for what those societies might look like.
Taking a step back, maybe not everyone buys into the argument that increasingly automated manufacturing is likely to lead to human suffering, so let’s examine some possible outcomes. It’s relatively obvious at this point that most if not all manual labor can be handled by robotics, which means the only cases where machines might not displace laborers are those where laborers are paid a wage far less than the expense of building or operating a robot. As time goes on this bar will continue to drop and squeeze the living wage for laborers. In this scenario, either wages (including minimum wage) will need to continually drop, labor replaced by machines will need to be outlawed (adding shipping costs to the cost/benefit analysis unless globally adopted), or labor jobs will no longer be available. You could argue that only large companies would be able to replace laborers with machines, but if that’s true it will also mean the end of small companies, as they’ll increasingly fall behind on competitive pricing and quality (machines’ consistency only improves with time).
Extrapolating this scenario further, laborers will likely need to learn white collar (creative or intellectual) jobs, putting extreme pressure on those jobs, since laborers form the large base of the workforce pyramid. Either white collar jobs would start to see wages decrease too, or former laborers would simply fail to find work. As consumer purchasing power decreases due to decreased wages, it will be met with decreased production costs in a race to the bottom. Obviously production costs will always exceed $0, while out-of-work consumers will have $0 of purchasing power. This will either lead to a steadily increasing number of people on welfare, or will result in an explosion of crime and suffering if the increasingly wealthy business owners’ taxes aren’t proportionately increased to cover the welfare burden.
This obviously isn’t the only way things could play out, but it’s not an unlikely scenario if things continue on as they have. Some might argue that this means we should simply stop progress, but business optimization is built into the framework of our economic system, which is why it would take massive (and unpopular) government intervention to change it (ex. outlawing the increased use of robotics). This is why we need to examine other systems which may be tolerant of increased mechanization of labor. In fact, the decrease in available work caused by massive mechanization should be a blessing for society, as it would free up everyone to work less and enjoy life more. Unfortunately, we’ve done such a good job of consumerizing everyone that if people had more free time they’d likely use it to try to make more money to buy more/better things.
This points out another possible area for government intervention: limit the workweek (ex. 18 hours) for all white collar jobs, except for vital jobs in understaffed areas (ex. doctors). This would initially give more breathing room to white collar jobs as laborers transition into them. Unfortunately this would turn the middle class into the lower class with virtually no way to rise up, since working longer would be prohibited. The only recourse would be to try to start new businesses, but without the capital to build a fleet of machines, these businesses would be subject to extremely disproportionate loans, with most profits taken by wealthy investors.
My goal isn’t to use these scenarios as scare tactics to sell my own proposed system, but rather to point out that simply patching our existing system isn’t likely to be enough to overcome the challenges posed by increased mechanization. I don’t have all the answers, but I look forward to joining a thriving community discussing, designing, and building a better system. Our goal here should be to facilitate that community.
Create a better future for civilization through big innovation.
We’re committed to creating a better future for civilization through big innovation. Our goal is to serve the innovators of the world, to help them help us all. Whether you’re an architect, an engineer, a weekend doodler, or just have an idea you think could make the world better, we want to help you polish your ideas and bring them to life.
Unlike other organizations, we’re focused on big ideas which can have big impacts. Instead of working on an idea to improve cars, we’d work on ideas for new types of transportation without cars; instead of trying to feed the homeless, we’ll investigate ideas for different systems where no one goes homeless. Big ideas like these may seem virtually impossible to achieve, but there are countless examples of humanity achieving what would have seemed impossible before it happened.
From building the Great Pyramid over 4000 years ago to building a half-mile-tall tower, from filling cities with tens of millions of people to people living 250 miles above the Earth, humans have historically pushed the boundaries of our world, and we’re only getting better at it. The Great Pyramid took between 10 and 20 years to build, but the Burj Khalifa tower, more than 5 times taller, took just 5 years. Organizations working on immediate issues are important, but it’s the big ideas which will truly shape our future.
A community discussion platform built for maximum insight with minimum oversight
The Collaboration Tree (cTree) is a new web platform being developed by our organization which will facilitate focused collaborative innovation. In other words, it will allow a large number of people to provide useful input for a focused discussion, while making it easy to extract the most useful input and take action on it. For our purposes we’ll be using the framework to discuss big ideas for improving civilization, but we’ll also make it freely available for use on other smaller projects. The source code will be freely available in the hopes that developers will expand and contribute back to the project and continually improve it.
The reason we call it a tree is because by default it will be organized using a branching structure, starting with the core goals and eventually expanding out to individual branches, discussing the implementation details. We believe this structure will be useful for everything from coming up with ideas for solving a problem to the exact specification of how to implement the idea. The goal is to end up with the optimal blueprint which accomplishes the core goals.
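To make the branching structure concrete, here’s a minimal sketch of how such a discussion tree might be modeled. This is purely illustrative and isn’t taken from the cTree codebase; the class and method names (`DiscussionNode`, `branch`, `outline`) are hypothetical.

```python
class DiscussionNode:
    """One node in a branching discussion, cTree-style (illustrative only).

    The root holds a core goal; each child refines its parent with a
    more specific idea or implementation detail.
    """

    def __init__(self, topic):
        self.topic = topic
        self.children = []

    def branch(self, topic):
        """Open a sub-discussion under this node and return the new node."""
        child = DiscussionNode(topic)
        self.children.append(child)
        return child

    def outline(self, depth=0):
        """Flatten the tree into an indented outline, root first."""
        lines = ["  " * depth + self.topic]
        for child in self.children:
            lines.extend(child.outline(depth + 1))
        return lines


# Start from a core goal, then branch out toward implementation details.
root = DiscussionNode("Core goal: reduce urban congestion")
transit = root.branch("Idea: on-demand shared transit")
transit.branch("Detail: routing and scheduling")
root.branch("Idea: mixed-use zoning")

print("\n".join(root.outline()))
```

Walking the tree from the root down, as `outline` does here, is one simple way to extract a single “blueprint” view from many parallel branches of discussion.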
If you’re interested in helping us develop or test the Collaboration Tree framework, become a member or check out our demo site project on GitHub. There’s no cost to membership and you’ll be part of an exciting and innovative community.
To find answers to frequent questions about the Collaboration Tree check out our FAQ page.