Harness FME SDKs and Customer-Deployed Components
Overview
Harness FME provides the following categories of SDKs and customer-deployed components:
- Client-side SDKs
- Client-side SDK Suites
- Client-side RUM Agents
- Server-side SDKs
- Optional Infrastructure
To find out what's new or updated in any Harness FME SDK release, visit the SDK's GitHub repository. The CHANGES.txt
file provides detailed information about updates and the dates they were made.
When you integrate FME SDKs, consider the following to make sure that you have the correct setup for your use case, customers, security considerations, and architecture.
- Understand Harness FME architecture. FME SDKs were built to be scalable, reliable, fast, independent, and secure.
- Determine which SDK type to use. Depending on your use case and your application stack, you may need a server-side or client-side SDK.
- Understand security considerations. Client- and server-side SDKs have different security considerations when managing and targeting with your customers' PII.
- Determine which API key to use. In Harness FME, there are three types of authorization keys, each providing a different level of access to Harness FME's API. Understand what each key provides access to and when to use it.
- Determine which SDK language to use. FME supports SDKs across many languages, and you can use multiple SDKs if your product is composed of applications written in multiple languages.
- Determine if you need to use the Split Synchronizer & Proxy. By default, FME SDKs keep segment and feature flag definitions synchronized as users navigate across disparate systems, treatments, and conditions. However, some languages do not have a native capability to keep a shared local cache of this data to properly serve treatments. For these cases, we built the Split Synchronizer. To learn more, refer to the Split Synchronizer guide.
Streaming architecture
FME SDKs were built to be scalable, reliable, fast, independent, and secure.
- Scalable. Harness FME currently serves more than 50 billion feature flag evaluations per day. If you've shopped online, purchased an airline ticket, or received a text message from a service provider, you've likely experienced Harness FME.
- Reliable and fast. Our scalable and flexible architecture uses a dual-layer CDN to serve feature flags anywhere in the world in less than 200 ms. In most instances, FME rollout plan updates are streamed to FME SDKs, which takes a fraction of a second. In less than 10% of cases, for very large feature flag definitions (or large dynamic configs) or segment updates with a large number of key changes, a notification of the change is streamed and the changes are retrieved by an API fetch request. Our SDKs store the FME rollout plan locally to serve feature flags without a network call and without interruption in the event of a network outage.
- Independent with no Harness FME dependency. Harness FME ships the evaluation engine inside each SDK, creating only a weak dependency on Harness FME's backend and increasing both speed and reliability. No network call to Harness FME servers is needed to decide a user's treatment, as the sketch after this list illustrates.
- Secure with no PII required. No customer data needs to be sent through the cloud to Harness FME. Use customer data in your feature flag evaluations without exposing this data to third parties.
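To make the local evaluation concrete, here is a minimal Java sketch. It assumes an already-initialized `SplitClient` (see the API keys section below) and uses a hypothetical flag name; the evaluation runs entirely in-process against the SDK's cached rollout plan.

```java
import io.split.client.SplitClient;

public class CheckoutFeature {
    // Decides whether to show the new checkout flow for a given customer.
    // The SDK evaluates against its locally cached rollout plan; no call to
    // Harness FME servers happens on this code path.
    static boolean useNewCheckout(SplitClient client, String customerId) {
        // "new_checkout_flow" is a hypothetical flag name used for illustration.
        String treatment = client.getTreatment(customerId, "new_checkout_flow");
        return "on".equals(treatment);
    }
}
```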
Streaming versus polling
FME updates can be streamed to FME SDKs in under a second or retrieved at configurable polling intervals; a configuration sketch follows the lists below.
When streaming, Harness FME utilizes server-sent events (SSE) to notify FME SDKs when a feature flag definition is updated, a segment definition is updated, or a feature flag is killed. For feature flag and segment definition updates, the SDK reacts to this notification and fetches the latest feature flag definition or segment definition. When a feature flag is killed, the notification triggers a kill event immediately. When the SDK is running with streaming enabled, your updates take effect in milliseconds.
Enable streaming when it is important to:
- Reduce network traffic caused by frequent polling
- Propagate Harness FME updates to every customer and/or service in real-time
When polling, the SDK asks the server for updates at configurable polling intervals. Each request is optimized to fetch delta changes, resulting in small payload sizes.
Utilize polling when it is important to:
- Maintain a lower memory footprint. Each streaming connection is treated as an independent request
- Support environments with unreliable connectivity such as mobile networks. Mobile environments benefit from a low-frequency polling architecture
- Maintain robust security practices. Keeping an always-open streaming connection can pose additional risk
- Maintain control over frequency and when to initiate a network call
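As a rough illustration of the two modes, the sketch below configures the Java SDK either with streaming enabled (the default) or with polling only. It assumes the Java SDK's `SplitClientConfig` builder options `streamingEnabled`, `featuresRefreshRate`, and `segmentsRefreshRate`; the interval values shown are arbitrary examples.

```java
import io.split.client.SplitClientConfig;

public class SdkSyncConfig {
    // Streaming on (the default): definition changes are pushed over SSE,
    // so updates reach the SDK in milliseconds.
    static SplitClientConfig streamingConfig() {
        return SplitClientConfig.builder()
                .streamingEnabled(true)
                .build();
    }

    // Polling only: the SDK fetches delta changes on fixed refresh intervals (seconds).
    static SplitClientConfig pollingConfig() {
        return SplitClientConfig.builder()
                .streamingEnabled(false)
                .featuresRefreshRate(30)   // feature flag definitions
                .segmentsRefreshRate(60)   // segment definitions
                .build();
    }
}
```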
Streaming is supported as of the following SDK versions:
- .NET 6.1.0
- Android 2.6.0
- Browser 0.1.0
- Go 5.2.0
- iOS 2.7.0
- Java 4.0.0
- JavaScript 10.12.0
- Node.js 10.12.0
- React 1.2.0
- React Native 0.0.1
- Redux 1.2.0
- Ruby 7.1.0
- Python 8.3.0
SDK types
Our supported SDKs fall into two categories:
Type | Overview |
---|---|
Client-side | Designed to run on devices you do not control, such as browsers and mobile devices. Each instance evaluates feature flags for the single ID supplied at initialization and is initialized with a client-side SDK API key. |
Server-side | Designed to run in your own infrastructure, such as backend services. These SDKs can compute treatments for any possible ID and are initialized with a server-side SDK API key. |
Security considerations
Client- and server-side SDKs have different security considerations:
Type | Security Considerations |
---|---|
Client-side | These SDKs run in browsers and on devices that you do not control, so assume the SDK API key can be seen by end users. Use a client-side SDK API key, which has restricted access and only downloads the segment memberships for the ID used to initialize the SDK, so no full lists of customer IDs reach the client. |
Server-side | These SDKs run within infrastructure you control, so they can use a server-side SDK API key. Keep this key secret: it downloads the full rollout plan, including the entire contents of every segment in the environment. |
Block traffic until the SDK is ready
When the SDK is instantiated, it begins background tasks to update an in-memory cache with data fetched from Harness servers. Depending on the size of the data, this initialization process can take up to a few hundred milliseconds.
During this intermediate state, if the SDK is asked to evaluate which treatment to show for a specific feature flag, it may not yet have the necessary data to make an accurate evaluation. In such cases, the SDK does not fail, but instead returns the `control` treatment.
To avoid serving the `control` treatment prematurely, you can block traffic until the SDK is fully ready. This is best done as part of your application's startup sequence to ensure that feature flag evaluations are accurate before serving users.
For example:
```ruby
require 'splitclient-rb'
options = { block_until_ready: 10 }
begin
split_factory = SplitIoClient::SplitFactoryBuilder.build("YOUR_API_KEY", options)
split_client = split_factory.client
rescue SplitIoClient::SDKBlockerTimeoutExpiredException
puts "SDK failed to initialize within the requested time."
end
```

This code waits up to 10 seconds for the SDK to initialize. If the SDK is not ready within that time, an exception is raised, allowing you to handle the failure accordingly.
Using FME SDKs in serverless environments
In serverless environments, data persistence is best handled by externalizing state to avoid the performance impact of "cold starts", which occur when functions must initialize and load data before they can execute. This is especially important for feature flagging SDKs like FME, which rely on cached data to perform evaluations efficiently.
To achieve optimal performance when using FME within AWS Lambda functions or other serverless platforms, see the Serverless Applications Powered by FME Feature Flags blog post for practical examples and best practices.
API keys
Typically, you need one API key per Harness FME environment, and additionally, you may want to issue extra API keys per microservice of your product using Harness FME for better security isolation. You must identify which type of SDK you're using to ensure you select the appropriate API key type.
In practice, you need only a single client-side and a single server-side SDK API key for each Harness FME environment. When an environment is created, FME automatically creates one key of each type for the new environment.
There is nothing wrong with having multiple keys of the same type for the same environment, but there is no real reason to do so because FME does not track which API key is used.
A client-side SDK (like JavaScript, iOS, or Android) should be initialized with a client-side SDK API key. A server-side SDK (like Go, Java, .NET, etc.) should be initialized with a server-side SDK API key. The main difference between the access provided to client-side SDKs using a client-side SDK API key and server-side SDKs using a server-side SDK API key is the way they retrieve information about segments.
The client-side SDKs hit the `/memberships` endpoint, which only returns the segments containing the ID used to initialize the SDK. The server-side SDKs call `/segmentChanges`, which downloads the entire contents of every segment in the environment. This way, the server-side SDKs can compute treatments for any possible ID, while the client-side SDKs minimize space overhead for browsers and mobile devices by downloading only the segment information needed to process `getTreatment` calls for the ID specified during initialization.
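As a minimal sketch of the distinction, the Java snippet below initializes a server-side SDK with a server-side SDK API key. The key placeholder and the 10-second timeout are illustrative, and it assumes the Java SDK's `SplitFactoryBuilder` and `setBlockUntilReadyTimeout` option.

```java
import io.split.client.SplitClient;
import io.split.client.SplitClientConfig;
import io.split.client.SplitFactory;
import io.split.client.SplitFactoryBuilder;

public class ServerSideInit {
    // Builds a server-side SDK client. A client-side SDK (JavaScript, iOS, Android)
    // would instead be initialized with a client-side SDK API key.
    static SplitClient buildClient() throws Exception {
        SplitClientConfig config = SplitClientConfig.builder()
                .setBlockUntilReadyTimeout(10_000) // wait up to 10 seconds for the rollout plan
                .build();
        SplitFactory factory = SplitFactoryBuilder.build("YOUR_SERVER_SIDE_SDK_KEY", config);
        SplitClient client = factory.client();
        client.blockUntilReady(); // block traffic until the SDK is ready, as described above
        return client;
    }
}
```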
Within Harness FME, the following three types of keys each provide different levels of access to Harness FME's API:
Type | Overview |
---|---|
Server-side | Used to initialize server-side SDKs (for example, Go, Java, or .NET). Grants access to download the full rollout plan, including the entire contents of every segment in the environment, so keep it secret. |
Client-side | Used to initialize client-side SDKs (for example, JavaScript, iOS, or Android). Provides restricted access and only returns the segment memberships for the ID used to initialize the SDK, so it can be shipped in browsers and mobile apps. |
Admin | Used to authenticate calls to the Harness FME Admin API for programmatically managing resources such as environments, feature flags, and segments. Keep it secret. |
RUM Agents
Harness FME real user monitoring (RUM) agents collect detailed information about your users' experience when they visit your application. This information is used to analyze site impact, measure the degradation of performance metrics in relation to feature flag changes, and alert the owner of the feature flag about such degradation.
For more information, see Client-side Agents.
Evaluator service
For languages with no native SDK support, Harness FME offers the Split Evaluator, a small service that evaluates all available features for a given customer via a REST endpoint. The service is available as a Docker container for ease of installation and supports standard health checks, making it compatible with popular orchestration platforms like Kubernetes for reliable uptime. Learn more about the Split Evaluator.
Synchronizer service
By default, FME SDKs keep segment and feature flag definitions synchronized in an in-memory cache for speed at evaluating feature flags. However, some languages do not have a native capability to keep a shared local cache of this data to properly serve treatments. For these cases, we built the Split Synchronizer to maintain an external cache such as Redis. To learn more, read about the Split Synchronizer.
Proxy service
Split Proxy enables you to deploy a service in your own infrastructure that behaves like Harness servers and is used by both server-side and client-side SDKs to synchronize the flags without directly connecting to Harness FME's backend.
This tool reduces connection latencies between the SDKs and the Harness server, and can be used when a single connection is required from a private network to the outside for security reasons. To learn more, read about Split Proxy.
Using a service for feature flags
Harness FME enables you to roll out features and experiment with target groups of customers across the full web stack, from deep backend services to client-facing JavaScript and mobile applications.
Feature flagging is especially valuable in mobile environments. For instance, when a critical bug appears in a newly released mobile feature, you can’t push an immediate fix due to App Store approval delays, and customers cannot be forced to update their apps promptly.
Many mobile and IoT apps are optimized for resource-constrained devices. A feature flagging solution should minimize any impact on app size or performance.
Harness FME provides per-language libraries (e.g., .NET, Java, Node, PHP, Python, Ruby, Go) for backend environments, iOS and Android SDKs for mobile, and a JavaScript SDK with first-class support for React and Redux. For unsupported languages, we recommend wrapping one of our server-side SDKs inside a small service hosted on your infrastructure.
This “phone home” approach—where browser, mobile, or IoT clients query a centralized service at startup to retrieve their feature flag state—offers several benefits, including the following:
- Uniform experience across devices, versions, and platforms. Ensures consistent feature flag states for users, whether they access your product via mobile apps or the web.
- Update your platform independently from app releases. By hosting FME server-side, you can upgrade SDK versions centrally without requiring users to update their mobile or IoT apps.
- Leverage richer data for flag evaluation. Server-side evaluation can incorporate user data not available on the client, such as demographics or model outputs, without exposing sensitive data on devices or browsers.
- No impact on app size or performance. Hosting the SDK on the server eliminates the need to embed additional libraries in client apps, preserving lightweight and performant clients.
Best practices for designing the feature flag service
Harness FME offers the Split Evaluator as a ready-made server-side solution for evaluating feature flags in languages without native SDK support.
The client app should query the service to retrieve a mapping of feature names to treatments for the current user. Assuming the service is deployed at `/splits`, a recommended REST API design is:
```
@GET
/splits/{customer_id}?dimension_1={dimension_1_value}&dimension_2={dimension_2_value}....
```

For example:

```
/splits/4915?connection_speed=3G&country=usa&device_type=android....
```
- `customer_id`: Unique identifier for the user. Ideally shared between web and mobile to ensure consistent treatments.
- `dimension_n`: Optional user or device attributes the service should consider during evaluation (e.g., location, connection speed, device type).
We recommend securing this API with HTTPS due to potentially sensitive query parameters.
Response schema
The API returns a list of feature-treatment mappings:
```json
[
  {
    "featureName": "string",
    "treatment": "string"
  }
]
```
The API always returns HTTP 200. In failure cases, it returns an empty list.
If a feature is not present in the response, the client should treat it as having the `control` treatment, which indicates an evaluation problem. Clients must handle the `control` case gracefully.
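As an illustration of how a client might build that response map, here is a minimal Java sketch. The service URL and query parameter are placeholders, and it assumes Java 11+'s `java.net.http.HttpClient` plus the Gson library for JSON parsing; any HTTP or JSON library works.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

import com.google.gson.Gson;
import com.google.gson.reflect.TypeToken;

public class SplitServiceClient {
    // Fetches all treatments for a customer and returns a featureName -> treatment map.
    static Map<String, String> fetchTreatments(String customerId) throws Exception {
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://flags.example.com/splits/" + customerId + "?device_type=android"))
                .GET()
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        // The service returns a JSON array of {featureName, treatment} objects;
        // a feature missing from the map is later handled as the control case.
        List<Map<String, String>> entries = new Gson().fromJson(
                response.body(), new TypeToken<List<Map<String, String>>>() {}.getType());

        Map<String, String> treatments = new HashMap<>();
        for (Map<String, String> entry : entries) {
            treatments.put(entry.get("featureName"), entry.get("treatment"));
        }
        return treatments;
    }
}
```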
Example treatment usage in Java:
```java
String treatment = ...; // retrieved from the response map

if ("on".equals(treatment)) {
    // Enable feature
} else if ("off".equals(treatment)) {
    // Disable feature
} else {
    // Handle 'control' or unknown treatment
}
```
If your app treats `control` as `off`, you can simplify to:
if ("on".equals(treatment)) {
// Enable feature
} else {
// Disable feature
}
Server-side example (Java, using JAX-RS):
@Path("/splits")
public class SplitServer {
private SplitManager _manager;
private SplitClient _client;
@Inject
public SplitServer(SplitFactory factory) {
_manager = factory.manager();
_client = factory.client();
}
@Path("{customer_id}")
@GET
public Response evaluateAllFeatures(@PathParam("customer_id") id,
@Context UriInfo uriInfo) {
Map<String, Object> attributes = uriInfo.getQueryParams();
List<Map<String, String>> result = new ArrayList();
for (Split split : _manager.splits()) {
String t = _client.getTreatment(id, split.name(), attributes);
Map<String, String> m = new HashMap();
m.put("featureName", split.name());
m.put("treatment", t);
result.add(m);
}
return Response.ok(result);
}
}