I am not sure, but we must have 3 or 5 Web Services in total. The software developers handle security and all the rest in their code. We don’t have a problem and don’t need any tool to manage Web Services.
The company we contracted to develop a mobile application for our agency wants us to expose the necessary functionality to them as REST services. Almost all of the required functionality already exists in our legacy SOAP services. Will we have to rewrite all of them?
We consume an external API with a per-use charge, and many of our internal applications use it. Since the API serves sensitive personal information, we don’t want to give the username/password to every developer, and we want to be able to see who accesses which information.
We started using Keycloak for SSO, and we will update all our Web Services so that they accept authentication tokens from Keycloak.
For an urgent integration project, my unit has to develop about 100 Web Services in one month, but I have only two staff members, neither of whom has developed a Web Service before!
We have some Web Services that consume other stakeholders’ APIs/Web Services before responding to other parties.
We retrieve the currency rates of the Central Bank daily, and record the values to local Postgres and Oracle databases for our internal applications. We planned to develop a simple program for that, but had no time to do so. Is it possible to automate the process to avoid the effort and potential problems of manual …
We want to log all access details and message content of our Web Services to be able to find out which data we sent to whom, and when. Additionally, only authorized users must be able to see this data. We tried to log everything in our relational database, but soon the data became too big …
I am not sure, but we must have 3 or 5 Web Services in total. The software developers handle security and all the rest in their code. We don’t have a problem and don’t need any tool to manage Web Services.
The institution created a new Web Service for every integration request, and the developers put a username/password pair into the code for simple authentication. No one knew how many Web Services the institution hosted, whether they were secure, who had been using them, since when, or for what kinds of operations. In short, there was a huge problem of the unknown.
We offered to install Apinizer API Gateway for a PoC, and started to move the Web Services onto the API Gateway together with the developers, since only they could help us find the Web Services. Soon, we realized that the number of Web Services was far beyond what they had thought! We moved 42 services in the first stage.
After a few days, they said they had realized the importance of a management tool when they saw the number of Web Services and the request/response traffic on the dashboard. Soon, they licensed Apinizer API Gateway and some other Apinizer products.
The company we contracted to develop a mobile application for our agency wants us to expose the necessary functionality to them as REST services. Almost all of the required functionality already exists in our legacy SOAP services. Will we have to rewrite all of them?
The agency contracted a company to develop a mobile application to make some services available to citizens. Though most of the functionality required by the mobile application existed in legacy Web Services, the contractor company needed REST endpoints. Re-developing the existing services as REST was not feasible due to time and staff constraints.
We immediately exposed some of the SOAP Web Services as REST with the help of the Protocol Transformation capabilities of the API Gateway. We showed how to create Mock APIs for the mobile application developers to use until the actual endpoints were ready. Then we created three REST endpoints with DB-2-API in a few minutes. Finally, we configured a JWT authentication mechanism for all these APIs and added an IP restriction so that only the client app’s server can access them.
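For readers unfamiliar with protocol transformation, the sketch below shows the general idea in plain Python (Flask and requests): a REST endpoint receives a simple call, wraps it into a SOAP envelope for the legacy service, and returns the result as JSON. All names, URLs, and the XML structure are hypothetical; Apinizer performs this through configuration, without any code being written.

    # A minimal, hypothetical sketch of SOAP-to-REST protocol transformation.
    # Apinizer does this via configuration; this is only to illustrate the idea.
    from flask import Flask, jsonify
    import requests
    import xml.etree.ElementTree as ET

    app = Flask(__name__)

    LEGACY_SOAP_URL = "http://legacy.example.local/CitizenService"  # hypothetical

    SOAP_TEMPLATE = """<soapenv:Envelope
      xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"
      xmlns:cit="http://example.local/citizen">
      <soapenv:Body>
        <cit:GetCitizenInfo>
          <cit:nationalId>{national_id}</cit:nationalId>
        </cit:GetCitizenInfo>
      </soapenv:Body>
    </soapenv:Envelope>"""

    @app.route("/api/citizens/<national_id>", methods=["GET"])
    def get_citizen(national_id):
        # Forward the REST call to the legacy SOAP service.
        soap_body = SOAP_TEMPLATE.format(national_id=national_id)
        resp = requests.post(
            LEGACY_SOAP_URL,
            data=soap_body.encode("utf-8"),
            headers={"Content-Type": "text/xml; charset=utf-8"},
            timeout=10,
        )
        # Convert the XML response to JSON (response structure is hypothetical).
        root = ET.fromstring(resp.content)
        name = root.findtext(".//{http://example.local/citizen}fullName")
        return jsonify({"nationalId": national_id, "fullName": name})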
Both the developers and the administrators were surprised and impressed to see that we completed all of this in half a day. It did not take long for the agency to license Apinizer products. The mobile application is in use now, and the number of clients is increasing. The system is continuously kept under control with the help of the API Monitor and API Analytics capabilities. The agency recently contacted us for a new license to scale the system.
We consume an external API with a per-use charge, and many of our internal applications use it. Since the API serves sensitive personal information, we don’t want to give the username/password to every developer, and we want to be able to see who accesses which information.
There were a few problems in this case. First, every client application within the company copied the external API’s access credentials into its own code. This was a serious problem in terms of both security and management. Second, the company was unable to detect any malicious use of the information provided by the API.
After centrally handling the security and privacy of all traffic to and from the external API, easily and without coding, the company became an Apinizer customer.
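Conceptually, the gateway sits between the internal applications and the paid external API: it holds the single credential itself, injects it into every outgoing call, and records which internal client asked for which record. A minimal, hypothetical sketch of that idea in Python (Flask and requests; all URLs, keys, and header names are made up) follows; it is not Apinizer’s implementation, only an illustration of the pattern.

    # Hypothetical sketch: clients call this proxy with their own client key,
    # never seeing the external API's real credentials.
    import logging
    from flask import Flask, request, jsonify, abort
    import requests

    app = Flask(__name__)
    logging.basicConfig(filename="external_api_access.log", level=logging.INFO)

    EXTERNAL_API_URL = "https://external.example.com/persons"   # hypothetical
    EXTERNAL_USER, EXTERNAL_PASS = "svc_user", "svc_secret"     # kept only here

    # Map internal client keys to application names (illustrative only).
    INTERNAL_CLIENTS = {"key-hr-app": "hr-app", "key-crm-app": "crm-app"}

    @app.route("/persons/<person_id>", methods=["GET"])
    def get_person(person_id):
        client = INTERNAL_CLIENTS.get(request.headers.get("X-Client-Key", ""))
        if client is None:
            abort(401)
        # Audit trail: who asked for which record, and when.
        logging.info("client=%s person_id=%s", client, person_id)
        resp = requests.get(
            f"{EXTERNAL_API_URL}/{person_id}",
            auth=(EXTERNAL_USER, EXTERNAL_PASS),
            timeout=10,
        )
        return jsonify(resp.json()), resp.status_code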
We started using Keycloak for SSO, and we will update all our Web Services so that they accept authentication tokens from Keycloak.
The number of Web Services was high, and the number of developers available to do the work was low. Therefore, it would take too much time to update the code of all the Web Services to take tokens from the Keycloak server.
In about 6 minutes, we configured one of the Web Services to take an authentication token from Keycloak with the help of Apinizer API Gateway’s API Call policy. Then the staff applied the same policy to the other Web Services in about one minute each.
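What such a policy does, in effect, is validate the incoming token against Keycloak before the request reaches the backend. The sketch below shows a comparable check in plain Python using Keycloak’s standard OAuth2 token introspection endpoint (RFC 7662); the realm name, client credentials, and base URL are assumptions, and the exact path can differ between Keycloak versions.

    # Hypothetical sketch: validating a bearer token against Keycloak's
    # token introspection endpoint before letting a request through.
    import requests

    KEYCLOAK_BASE = "https://sso.example.local"           # assumption
    REALM = "agency"                                       # assumption
    CLIENT_ID, CLIENT_SECRET = "api-gateway", "secret"     # assumption

    INTROSPECT_URL = (
        f"{KEYCLOAK_BASE}/realms/{REALM}/protocol/openid-connect/token/introspect"
    )

    def token_is_valid(bearer_token: str) -> bool:
        """Ask Keycloak whether the token is active (RFC 7662 introspection)."""
        resp = requests.post(
            INTROSPECT_URL,
            data={"token": bearer_token},
            auth=(CLIENT_ID, CLIENT_SECRET),
            timeout=5,
        )
        resp.raise_for_status()
        return resp.json().get("active", False)

    # Example use in front of a backend call:
    # if not token_is_valid(request.headers["Authorization"].removeprefix("Bearer ")):
    #     return "Unauthorized", 401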
The customer completed the job in an unexpectedly short time, well ahead of their estimates.
For an urgent integration project, my unit has to develop about 100 Web Services in one month, but I have only two staff members, neither of whom has developed a Web Service before!
It was a very common problem: “much work, little time, few resources”. Additionally, the developers had no experience with Web Service development, security, failover, logging, or anything else involved. In other words, it was a “much work, little time, no resources” problem.
We introduced the Apinizer products to the project manager, and created a REST endpoint with DB-2-API in 1 minute. Then we configured authentication, IP restriction, and SQL Injection filters in an additional 3 minutes. At the end of 5 minutes, one of the Web Services they had to develop was ready. Finally, in about 2 minutes, we configured a monitor that sends an e-mail to the developers if the Web Service fails.
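The idea behind DB-2-API is to expose an existing database query as a REST endpoint without writing a service by hand; Apinizer generates this from configuration. Purely for illustration, a hand-written equivalent might look like the hypothetical Flask/sqlite sketch below, with a parameterized query standing in for the SQL Injection protection mentioned above. All table and column names are made up.

    # Hypothetical, hand-written equivalent of what a DB-to-API tool generates:
    # one SQL query exposed as a REST endpoint, with a bound parameter so
    # user input cannot inject SQL.
    import sqlite3
    from flask import Flask, jsonify

    app = Flask(__name__)
    DB_PATH = "integration.db"  # assumption; any relational source works the same way

    @app.route("/api/licenses/<int:citizen_id>", methods=["GET"])
    def get_licenses(citizen_id):
        con = sqlite3.connect(DB_PATH)
        try:
            rows = con.execute(
                "SELECT license_no, license_type, valid_until "
                "FROM licenses WHERE citizen_id = ?",
                (citizen_id,),  # bound parameter, not string concatenation
            ).fetchall()
        finally:
            con.close()
        return jsonify([
            {"licenseNo": r[0], "type": r[1], "validUntil": r[2]} for r in rows
        ])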
The agency immediately licensed Apinizer API Gateway, DB-2-API, and API Monitor. In 12 days, they put 87 Web Services into production with only one staff member, who knew the database and SQL but almost nothing about Web Services. The integration project is live now, and management discusses new integration scenarios while telling their success story to other agencies’ managers!
We have some Web Services that consume other stakeholders’ APIs/Web Services before responding to other parties. Sometimes, some of our Web Services stop running for unknown reasons, and we have no clue about the situation until we start receiving phone calls or e-mails from our clients. This has started to become a crisis at the administrative level, and it became crucial to be aware of any error instantly. Additionally, we want to be able to see the source of the problem, so we can fix the error if the problem is with our Web Service, or inform the stakeholders about the error so they can fix it.
It was obviously a lack of monitoring. Properly configured monitors could detect the problems and inform the relevant people instantly. Monitor logs would also help show the source of the problem to others.
The team immediately created a few monitors and started checking the integration points. We also configured actions to send e-mails to the developers if any error occurred on any of the integration points. Soon, the first e-mail showed that an endpoint of a stakeholder’s API was bringing the whole integration flow to a halt. The time of the error and the content of the request were sent to the stakeholder, and they fixed the endpoint.
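In spirit, such a monitor is simply a scheduled health check that records the result and alerts people when an endpoint fails. A minimal stand-alone sketch in Python (endpoint URL, mail server, and addresses are all assumptions; Apinizer configures this through its UI rather than a script) could look like this:

    # Hypothetical sketch of an endpoint monitor: poll, and e-mail on failure.
    import smtplib
    import time
    from email.message import EmailMessage

    import requests

    ENDPOINT = "https://integration.example.local/api/health"   # assumption
    SMTP_HOST = "mail.example.local"                             # assumption
    RECIPIENTS = ["dev-team@example.local"]                      # assumption

    def alert(reason: str) -> None:
        msg = EmailMessage()
        msg["Subject"] = f"Integration endpoint failed: {reason}"
        msg["From"] = "monitor@example.local"
        msg["To"] = ", ".join(RECIPIENTS)
        msg.set_content(f"Endpoint {ENDPOINT} failed at {time.ctime()}: {reason}")
        with smtplib.SMTP(SMTP_HOST) as smtp:
            smtp.send_message(msg)

    while True:
        try:
            resp = requests.get(ENDPOINT, timeout=10)
            if resp.status_code != 200:
                alert(f"HTTP {resp.status_code}")
        except requests.RequestException as exc:
            alert(str(exc))
        time.sleep(60)  # check once a minute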
Our customer now creates at least one monitor (in some cases more) for each Web Service as a step in their development process.
We retrieve the currency rates of the Central Bank daily, and record the values to local Postgres and Oracle databases for our internal applications. We planned to develop a simple program for that, but had no time to do so. Is it possible to automate the process to avoid the effort and potential problems of manual operations?
When automation is possible, manual operation is inefficient, error-prone, and annoying. The scenario in this case was a very common one: automating a process that spans multiple resources.
We created a job as a Task Flow in a few minutes and showed how to automate such a process. The first task retrieved the currency rates from the HTTP address as an XML document; the second and third tasks mapped the values to INSERT statements for the specified databases. Additionally, we configured an action to send an e-mail if the flow fails. Finally, we scheduled the job to run at a certain time every day.
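As a rough illustration of what the Task Flow automates, the sketch below fetches an XML rates document over HTTP, maps it to INSERT statements, and writes to PostgreSQL. The XML URL, element names, and table layout are assumptions (the Central Bank’s actual schema may differ), and the Oracle leg would be analogous with an Oracle driver.

    # Hypothetical sketch of the automated flow: fetch rates XML, insert into DB.
    import xml.etree.ElementTree as ET

    import requests
    import psycopg2

    RATES_URL = "https://centralbank.example/rates/today.xml"            # assumption
    PG_DSN = "dbname=rates user=etl password=secret host=localhost"      # assumption

    def run_once() -> None:
        xml_text = requests.get(RATES_URL, timeout=15).text
        root = ET.fromstring(xml_text)

        # Element and attribute names are assumptions about the XML layout.
        rows = [
            (cur.get("code"), float(cur.findtext("Selling")))
            for cur in root.findall("Currency")
        ]

        with psycopg2.connect(PG_DSN) as con, con.cursor() as cur:
            cur.executemany(
                "INSERT INTO currency_rates (code, selling_rate, rate_date) "
                "VALUES (%s, %s, CURRENT_DATE)",
                rows,
            )
        # An Oracle insert with the oracledb driver would follow the same pattern.

    if __name__ == "__main__":
        run_once()  # schedule with cron (or any scheduler) to run daily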
The customer was impressed to see that it takes just a few minutes to configure the job and put it into action. They immediately licensed API Integrator and created new task flows for their similar jobs. Though some of these jobs had already been automated with custom solutions, they still used API Integrator so that they could attach actions to the jobs to be informed about the results, and keep the logs.
Updating the existing application or the clients’ code was not an option. On the other hand, the query to gather the roles from the application’s database was too complex, and it would likely create a performance issue.
A senior developer in the company managed to solve the problem with Apinizer’s features. First, using Apinizer DB-2-API, he created an API that returns the roles of a given user. Then he configured Apinizer to cache the responses of this API. Since user roles were not modified often, a reasonably long cache invalidation period was enough to increase the performance of the API even though the query behind it was not fast. The next step was creating an API Call Policy to enrich the original request of any authenticated client with the roles provided by this new API. Finally, the backend APIs were able to read the client’s roles from the message headers.
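Stripped of the Apinizer specifics, the pattern is: look the roles up once, cache them with a time-to-live, and attach them to the forwarded request as a header. The hypothetical Python sketch below illustrates the idea; the URLs, header name, and TTL are assumptions, and Apinizer itself implements this through cache and API Call policies rather than code.

    # Hypothetical sketch: cache role lookups and enrich forwarded requests.
    import time

    import requests

    ROLES_API = "http://gateway.example.local/api/user-roles"   # assumption
    BACKEND_API = "http://backend.example.local/api/orders"     # assumption
    CACHE_TTL_SECONDS = 600   # roles change rarely, so a long TTL is acceptable

    _role_cache: dict[str, tuple[float, list[str]]] = {}

    def roles_for(username: str) -> list[str]:
        """Return cached roles, refreshing them when the TTL expires."""
        cached = _role_cache.get(username)
        if cached and time.time() - cached[0] < CACHE_TTL_SECONDS:
            return cached[1]
        roles = requests.get(f"{ROLES_API}/{username}", timeout=5).json()["roles"]
        _role_cache[username] = (time.time(), roles)
        return roles

    def forward_with_roles(username: str, payload: dict) -> requests.Response:
        # Enrich the original request with the caller's roles in a header,
        # so the backend can read them without querying the database itself.
        return requests.post(
            BACKEND_API,
            json=payload,
            headers={"X-User-Roles": ",".join(roles_for(username))},
            timeout=10,
        )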
A complex requirement was handled elegantly by the customer without coding. The scenario was really inspiring, even for the Apinizer team.
We want to log all access details and message content of our Web Services to be able to find out which data we sent to whom, and when. Additionally, only authorized users must be able to see this data. We tried to log everything in our relational database, but soon the data became too big to manage or query. Since it would not be possible to filter the logs, we decided not to use text files for logging, either. What can we do?
Partly for legal reasons, the institute wanted to log all access details and content of its Web Services, and to be able to filter these logs to find specific data. The potentially huge size of the data was a big problem in terms of the performance of both logging and querying. Restricting access to this log data was another topic to be considered.
We explained that Apinizer keeps logs in Elasticsearch, which is scalable, so even full-text search on log content is possible. There is no need to be an experienced Elasticsearch user, because all log configuration and query definitions are done via Apinizer’s form-based user interfaces, agnostic to Elasticsearch. Additionally, the customizable logging, querying, reporting, and visualization capabilities of Apinizer impressed the institute’s personnel. We showed that Apinizer can back up the log data as well, and that all of this can be done by authorized users only. After we configured a few APIs, produced load on those APIs to create log content, and built up example queries, the features were clear to the staff. Finally, we showed them how to handle privacy in the log data for sensitive information.
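To give a feel for why Elasticsearch suits this kind of log data, the sketch below indexes one API call record and then runs a full-text search over message content, combined with a filter on client IP. It uses the elasticsearch Python client directly (8.x-style calls), whereas in Apinizer the equivalent queries are built through the form-based UI; all index, field, and example values here are assumptions.

    # Hypothetical sketch: indexing and searching API traffic logs in Elasticsearch.
    from datetime import datetime, timezone

    from elasticsearch import Elasticsearch

    es = Elasticsearch("http://localhost:9200")   # assumption

    # Index one access record (field names are illustrative).
    es.index(
        index="api-traffic-logs",
        document={
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "api": "citizen-info",
            "client_ip": "10.0.12.34",
            "username": "hr-app",
            "request_body": "<GetCitizenInfo><nationalId>12345678901</nationalId></GetCitizenInfo>",
            "status_code": 200,
        },
    )

    # Full-text search: which messages mention a given value,
    # restricted to requests coming from one client IP?
    result = es.search(
        index="api-traffic-logs",
        query={
            "bool": {
                "must": [{"match": {"request_body": "12345678901"}}],
                "filter": [{"term": {"client_ip": "10.0.12.34"}}],
            }
        },
    )
    for hit in result["hits"]["hits"]:
        print(hit["_source"]["timestamp"], hit["_source"]["api"])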
After seeing Apinizer in action and how easy it is to log APIs’ traffic, and to filter, report on, and visualize the logged data, the institute decided to manage all APIs’ logs via Apinizer. As a case study, they created a custom query that filters all requests/responses to/from selected APIs from a specific IP, and defined a report that periodically sends the results of the query to specified e-mail addresses. We helped them configure Apinizer to back up log data to a managed shared folder and to clear log data from the active log database monthly for performance reasons. The institute now manages all its APIs and their logs via Apinizer.